| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
52f928fd7852b6b2405291d0695f1e3664338b07

# Dataset Card for Dataset Name
This dataset is perfect for fine-tuning models in conversational and text generation tasks, enhancing their ability to provide contextually relevant and coherent responses. It's invaluable for applications like chatbots and virtual assistants.
This dataset card aims to be a base template for new datasets. It has been generated using ChatGPT and conversational data collected online.
## Dataset Details
Purpose: Fine-tuning models for conversational and text generation tasks.
Content: Diverse text inputs and responses.
Benefit: Enhances contextually relevant and coherent response generation.
Application: Ideal for chatbots, virtual assistants, and more.
Size: 100k+ rows
### Dataset Description
This dataset, comprising a diverse range of text inputs and responses, can be used to fine-tune models for conversational and text generation tasks. By training smaller models with this dataset, you can adapt them to generate more contextually relevant and coherent responses in a conversational manner.
The utility of such a dataset lies in its ability to help models understand the nuances of human language, context, and conversational flow. Through the training process, models can learn to generate text that not only mimics human conversations but can also provide meaningful, context-aware responses. This can be particularly beneficial for applications like chatbots, virtual assistants, or any system that aims to engage in natural language interactions.
- **Created by:** CrabfishAI
- **Language(s) (NLP):** English (en)
- **License:** apache-2.0
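A minimal sketch of fine-tuning a small model on this dataset might look like the following, assuming it exposes `input` and `response` string columns (the actual field names are not specified in this card) and taking GPT-2 as an example of a small model:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments

# Hypothetical column names ("input", "response"); adjust to the actual schema.
dataset = load_dataset("CrabfishAI/ptgm-chat", split="train")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(example):
    # Concatenate each input/response pair into a single training text.
    text = example["input"] + tokenizer.eos_token + example["response"]
    out = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal LM: labels mirror inputs
    return out

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ptgm-chat-ft", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
)
trainer.train()
```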
## Uses
1. Enhancing chatbots and virtual assistants.
2. Improving content generation models.
3. Advancing customer support systems.
4. Boosting language understanding.
5. Supporting conversational AI research.
6. Innovating in education and language learning.
7. Enabling better social media content moderation.
8. Providing enhanced user experiences.
9. Enhancing personal assistant capabilities.
10. Customizing AI for specific industries or domains.
### Direct Use
1. Text Generation: Generate creative content like stories, poems, or product descriptions.
2. Conversational AI: Create chatbots or virtual assistants that engage in natural and context-aware conversations.
### Out-of-Scope Use
1. Bias and Quality: The dataset's quality and bias can impact model responses.
2. Contextual Challenges: Models may struggle with long conversations or specialized topics.
3. Generalization Issues: Difficulty in handling novel conversational scenarios.
4. Inappropriate Content: Risk of containing offensive or inappropriate content.
5. Data Sparsity: Limited data in specific domains or topics.
6. Computation Intensity: Resource-intensive for model training.
7. Privacy and Ethics: Concerns about data privacy and consent.
8. Lack of Multimodal Data: Limited to text, excluding visual or audio elements.
9. Overfitting Risk: Models may overfit to dataset patterns.
10. Maintenance and Updates: Requires ongoing updates to stay current.
## Dataset Structure
The data structure of a dataset used for conversational and text generation tasks typically includes the following components (an illustrative record is sketched after the list):

- **Input Text:** The text provided as the initial input, such as a user's message or query in a conversation.
- **Response Text:** The corresponding text generated by a model as a response to the input.
- **Metadata:** Additional information about each interaction, such as timestamps, user IDs, or context identifiers, which can help in maintaining the conversational flow.
- **Conversations:** A grouping of interactions, where each conversation consists of a series of input and response pairs that maintain context.
- **Labels:** If the dataset includes labeled data for training purposes, these labels might indicate the desired or correct responses.
- **Domain or Task Tags:** Tags or labels that specify the domain or task associated with each interaction, helping models understand the context and purpose of the conversation.
- **Source Information:** Details about where and how the conversations were collected.
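An illustrative record combining the components above might look like the sketch below; every field name is hypothetical, since the card does not pin down an exact schema:

```python
# Hypothetical example record; all field names are illustrative only.
example = {
    "input_text": "How do I reset my password?",
    "response_text": "Go to Settings > Account > Reset Password and follow the prompts.",
    "metadata": {
        "timestamp": "2023-11-08T12:03:30+00:00",  # when the interaction occurred
        "user_id": "u_123",                        # anonymized user identifier
        "context_id": "conv_42",                   # ties turns of one conversation together
    },
    "conversation": [  # earlier input/response pairs that provide context
        {"input": "Hi, I need help with my account.",
         "response": "Sure, what seems to be the problem?"},
    ],
    "label": "helpful",            # desired/correct-response label, if annotated
    "domain": "customer_support",  # domain or task tag
    "source": "ChatGPT-3.5",       # where the conversation was collected from
}
```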
## Dataset Creation
Created by: CrabfishAI.
### Source Data
1. ChatGPT-3.5
2. Wikipedia
3. Other datasets
4. Google's Bard AI | CrabfishAI/ptgm-chat | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-08T12:03:30+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational"]} | 2024-01-04T13:57:58+00:00 | []
| [
"en"
]
f5a0534b29b355175cadbf0bfb2c8ac43a25c047
# Dataset Card for Evaluation run of pankajmathur/Mistral-7B-model_45k6e2e4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/pankajmathur/Mistral-7B-model_45k6e2e4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [pankajmathur/Mistral-7B-model_45k6e2e4](https://huggingface.co/pankajmathur/Mistral-7B-model_45k6e2e4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
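# The "train" split of each configuration always points to the latest results.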
data = load_dataset("open-llm-leaderboard/details_pankajmathur__Mistral-7B-model_45k6e2e4_public",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T12:00:55.074514](https://huggingface.co/datasets/open-llm-leaderboard/details_pankajmathur__Mistral-7B-model_45k6e2e4_public/blob/main/results_2023-11-08T12-00-55.074514.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in its own configuration and in the "latest" split of each eval):
```json
{
"all": {
"acc": 0.23184197178254598,
"acc_stderr": 0.030693965962788314,
"acc_norm": 0.23241967500614574,
"acc_norm_stderr": 0.030706854185546608,
"mc1": 0.24357405140758873,
"mc1_stderr": 0.01502635482491078,
"mc2": 0.5084657838244592,
"mc2_stderr": 0.016201328114036084
},
"harness|arc:challenge|25": {
"acc": 0.2022184300341297,
"acc_stderr": 0.011737454431872104,
"acc_norm": 0.2431740614334471,
"acc_norm_stderr": 0.01253655414458709
},
"harness|hellaswag|10": {
"acc": 0.2577175861382195,
"acc_stderr": 0.004364838000335622,
"acc_norm": 0.2508464449312886,
"acc_norm_stderr": 0.0043261434303600976
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.17,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.17,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.18518518518518517,
"acc_stderr": 0.03355677216313142,
"acc_norm": 0.18518518518518517,
"acc_norm_stderr": 0.03355677216313142
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21509433962264152,
"acc_stderr": 0.02528839450289137,
"acc_norm": 0.21509433962264152,
"acc_norm_stderr": 0.02528839450289137
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.030952890217749874,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.030952890217749874
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.26382978723404255,
"acc_stderr": 0.028809989854102973,
"acc_norm": 0.26382978723404255,
"acc_norm_stderr": 0.028809989854102973
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813365,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813365
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.23448275862068965,
"acc_stderr": 0.035306258743465914,
"acc_norm": 0.23448275862068965,
"acc_norm_stderr": 0.035306258743465914
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24867724867724866,
"acc_stderr": 0.022261817692400175,
"acc_norm": 0.24867724867724866,
"acc_norm_stderr": 0.022261817692400175
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.30158730158730157,
"acc_stderr": 0.04104947269903394,
"acc_norm": 0.30158730158730157,
"acc_norm_stderr": 0.04104947269903394
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1774193548387097,
"acc_stderr": 0.02173254068932927,
"acc_norm": 0.1774193548387097,
"acc_norm_stderr": 0.02173254068932927
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.15270935960591134,
"acc_stderr": 0.02530890453938062,
"acc_norm": 0.15270935960591134,
"acc_norm_stderr": 0.02530890453938062
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.17676767676767677,
"acc_stderr": 0.027178752639044915,
"acc_norm": 0.17676767676767677,
"acc_norm_stderr": 0.027178752639044915
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.20512820512820512,
"acc_stderr": 0.02047323317355198,
"acc_norm": 0.20512820512820512,
"acc_norm_stderr": 0.02047323317355198
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2074074074074074,
"acc_stderr": 0.024720713193952165,
"acc_norm": 0.2074074074074074,
"acc_norm_stderr": 0.024720713193952165
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.18543046357615894,
"acc_stderr": 0.03173284384294285,
"acc_norm": 0.18543046357615894,
"acc_norm_stderr": 0.03173284384294285
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.1926605504587156,
"acc_stderr": 0.016909276884936094,
"acc_norm": 0.1926605504587156,
"acc_norm_stderr": 0.016909276884936094
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1527777777777778,
"acc_stderr": 0.024536326026134224,
"acc_norm": 0.1527777777777778,
"acc_norm_stderr": 0.024536326026134224
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.270042194092827,
"acc_stderr": 0.028900721906293426,
"acc_norm": 0.270042194092827,
"acc_norm_stderr": 0.028900721906293426
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.31390134529147984,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.31390134529147984,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2396694214876033,
"acc_stderr": 0.03896878985070417,
"acc_norm": 0.2396694214876033,
"acc_norm_stderr": 0.03896878985070417
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22085889570552147,
"acc_stderr": 0.032591773927421776,
"acc_norm": 0.22085889570552147,
"acc_norm_stderr": 0.032591773927421776
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285713,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285713
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2905982905982906,
"acc_stderr": 0.02974504857267404,
"acc_norm": 0.2905982905982906,
"acc_norm_stderr": 0.02974504857267404
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23754789272030652,
"acc_stderr": 0.015218733046150193,
"acc_norm": 0.23754789272030652,
"acc_norm_stderr": 0.015218733046150193
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.023929155517351284,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.023929155517351284
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.1864951768488746,
"acc_stderr": 0.02212243977248077,
"acc_norm": 0.1864951768488746,
"acc_norm_stderr": 0.02212243977248077
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.21604938271604937,
"acc_stderr": 0.022899162918445806,
"acc_norm": 0.21604938271604937,
"acc_norm_stderr": 0.022899162918445806
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.025257861359432414,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.025257861359432414
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.010996156635142692,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.010996156635142692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.18382352941176472,
"acc_stderr": 0.023529242185193106,
"acc_norm": 0.18382352941176472,
"acc_norm_stderr": 0.023529242185193106
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.25,
"acc_stderr": 0.01751781884501444,
"acc_norm": 0.25,
"acc_norm_stderr": 0.01751781884501444
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370518,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370518
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3216374269005848,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.3216374269005848,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24357405140758873,
"mc1_stderr": 0.01502635482491078,
"mc2": 0.5084657838244592,
"mc2_stderr": 0.016201328114036084
}
}
```
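One way to work with these aggregated numbers programmatically is to fetch the results file linked above directly from the dataset repository; a sketch (the key layout is assumed to match the snippet above):

```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results JSON from the dataset repo.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_pankajmathur__Mistral-7B-model_45k6e2e4_public",
    filename="results_2023-11-08T12-00-55.074514.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)

# Key layout assumed to match the snippet above.
print(results["all"]["acc"], results["all"]["mc2"])
```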
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_pankajmathur__Mistral-7B-model_45k6e2e4 | [
"region:us"
]
| 2023-11-08T12:03:34+00:00 | {"pretty_name": "Evaluation run of pankajmathur/Mistral-7B-model_45k6e2e4", "repo_url": "https://huggingface.co/pankajmathur/Mistral-7B-model_45k6e2e4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]"} |
["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-management|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T12-00-55.074514.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T12_00_55.074514", "path": ["results_2023-11-08T12-00-55.074514.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T12-00-55.074514.parquet"]}]}]} | 2023-11-08T12:04:33+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of pankajmathur/Mistral-7B-model_45k6e2e4
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model pankajmathur/Mistral-7B-model_45k6e2e4 on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
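For example — a minimal sketch: the repository id below is an assumption following the Open LLM Leaderboard's `details_<org>__<model>` naming convention, and any config name from the metadata above can be substituted:

```python
from datasets import load_dataset

# Assumed repo id, following the leaderboard's "details_<org>__<model>" convention.
data = load_dataset(
    "open-llm-leaderboard/details_pankajmathur__Mistral-7B-model_45k6e2e4",
    "harness_hendrycksTest_abstract_algebra_5",  # any config listed in the metadata
    split="latest",  # or a timestamped split such as "2023_11_08T12_00_55.074514"
)
print(data[0])
```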
## Latest results
These are the latest results from run 2023-11-08T12:00:55.074514 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of pankajmathur/Mistral-7B-model_45k6e2e4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model pankajmathur/Mistral-7B-model_45k6e2e4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T12:00:55.074514(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of pankajmathur/Mistral-7B-model_45k6e2e4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model pankajmathur/Mistral-7B-model_45k6e2e4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T12:00:55.074514(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of pankajmathur/Mistral-7B-model_45k6e2e4## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model pankajmathur/Mistral-7B-model_45k6e2e4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T12:00:55.074514(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
2f99fd6a563d530853a3cb8971524f8ad835f80c |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | Chhabi/Nepali-Health-QA | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:ne",
"license:apache-2.0",
"health",
"question-answer",
"nepali",
"region:us"
]
| 2023-11-08T12:17:35+00:00 | {"language": ["ne"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "pretty_name": "Nepali-Health-QA", "tags": ["health", "question-answer", "nepali"]} | 2023-11-08T12:57:55+00:00 | []
| [
"ne"
]
| TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Nepali (macrolanguage) #license-apache-2.0 #health #question-answer #nepali #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Nepali (macrolanguage) #license-apache-2.0 #health #question-answer #nepali #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
59,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Nepali (macrolanguage) #license-apache-2.0 #health #question-answer #nepali #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
2e6be1fef13ee707e57c51c3f01250ea7d3ac4b1 | # Dataset Card for "bw_spec_cls_80_05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_05 | [
"region:us"
]
| 2023-11-08T12:21:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "11677", "1": "11679", "2": "11681", "3": "11682", "4": "11683", "5": "11763", "6": "11764", "7": "11765", "8": "11766", "9": "11767", "10": "11768", "11": "11769", "12": "11770", "13": "11771", "14": "11772", "15": "11773", "16": "11774", "17": "11775", "18": "11776", "19": "11777", "20": "11778", "21": "11779", "22": "11780", "23": "11781", "24": "11782", "25": "11783", "26": "11784", "27": "11785", "28": "11786", "29": "11787", "30": "11788", "31": "11789", "32": "11790", "33": "11791", "34": "11792", "35": "11793", "36": "11794", "37": "11795", "38": "11818", "39": "11839", "40": "11861", "41": "11862", "42": "11867", "43": "11868", "44": "11916", "45": "11917", "46": "11918", "47": "11919", "48": "11920", "49": "11921", "50": "11922", "51": "11933", "52": "11937", "53": "11942", "54": "11946", "55": "11947", "56": "11951", "57": "12109", "58": "12146", "59": "12147", "60": "12173", "61": "12174", "62": "12189", "63": "12346", "64": "12348", "65": "12349", "66": "12350", "67": "12351", "68": "12352", "69": "12353", "70": "12355", "71": "12376", "72": "12394", "73": "12530", "74": "12531", "75": "12532", "76": "12537", "77": "12551", "78": "12552", "79": "12654"}}}}], "splits": [{"name": "train", "num_bytes": 85796400.0, "num_examples": 1600}], "download_size": 86446850, "dataset_size": 85796400.0}} | 2023-11-08T12:21:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_05"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_05\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_05\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_05\"\n\nMore Information needed"
]
|
db39f35aef81b0336c3c47f712c98eff35660878 | # Dataset Card for "mnli_anli_fevernli_wanli_lingnli_xnli_train"
Train data in a harmonized format for multiple NLI datasets. | MoritzLaurer/mnli_anli_fevernli_wanli_lingnli_xnli_train | [
"region:us"
]
| 2023-11-08T12:32:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "mnli", "path": "data/mnli-*"}, {"split": "fevernli", "path": "data/fevernli-*"}, {"split": "anli", "path": "data/anli-*"}, {"split": "wanli", "path": "data/wanli-*"}, {"split": "lingnli", "path": "data/lingnli-*"}, {"split": "xnli", "path": "data/xnli-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "mnli", "num_bytes": 75405050, "num_examples": 392702}, {"name": "fevernli", "num_bytes": 76336755, "num_examples": 196805}, {"name": "anli", "num_bytes": 64930916, "num_examples": 162865}, {"name": "wanli", "num_bytes": 17409074, "num_examples": 102885}, {"name": "lingnli", "num_bytes": 5868113, "num_examples": 29985}, {"name": "xnli", "num_bytes": 9825139, "num_examples": 37350}], "download_size": 0, "dataset_size": 249775047}} | 2023-11-08T13:00:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mnli_anli_fevernli_wanli_lingnli_xnli_train"
Train data in a harmonized format for multiple NLI datasets. | [
"# Dataset Card for \"mnli_anli_fevernli_wanli_lingnli_xnli_train\"\n\nTrain data in a harmonized format for multiple NLI datasets."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mnli_anli_fevernli_wanli_lingnli_xnli_train\"\n\nTrain data in a harmonized format for multiple NLI datasets."
]
| [
6,
45
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mnli_anli_fevernli_wanli_lingnli_xnli_train\"\n\nTrain data in a harmonized format for multiple NLI datasets."
]
|
5730a87de11f3d54709719263e31b7b206feedac | # Dataset Card for "mnli_anli_fevernli_wanli_lingnli_xnli_test"
Test data in a harmonized format for multiple NLI datasets. | MoritzLaurer/mnli_anli_fevernli_wanli_lingnli_xnli_test | [
"region:us"
]
| 2023-11-08T12:37:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "mnli_m", "path": "data/mnli_m-*"}, {"split": "mnli_mm", "path": "data/mnli_mm-*"}, {"split": "fevernli", "path": "data/fevernli-*"}, {"split": "anli_r1", "path": "data/anli_r1-*"}, {"split": "anli_r2", "path": "data/anli_r2-*"}, {"split": "anli_r3", "path": "data/anli_r3-*"}, {"split": "wanli", "path": "data/wanli-*"}, {"split": "lingnli", "path": "data/lingnli-*"}, {"split": "xnli_ar", "path": "data/xnli_ar-*"}, {"split": "xnli_bg", "path": "data/xnli_bg-*"}, {"split": "xnli_de", "path": "data/xnli_de-*"}, {"split": "xnli_el", "path": "data/xnli_el-*"}, {"split": "xnli_en", "path": "data/xnli_en-*"}, {"split": "xnli_es", "path": "data/xnli_es-*"}, {"split": "xnli_fr", "path": "data/xnli_fr-*"}, {"split": "xnli_hi", "path": "data/xnli_hi-*"}, {"split": "xnli_ru", "path": "data/xnli_ru-*"}, {"split": "xnli_sw", "path": "data/xnli_sw-*"}, {"split": "xnli_th", "path": "data/xnli_th-*"}, {"split": "xnli_tr", "path": "data/xnli_tr-*"}, {"split": "xnli_ur", "path": "data/xnli_ur-*"}, {"split": "xnli_vi", "path": "data/xnli_vi-*"}, {"split": "xnli_zh", "path": "data/xnli_zh-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "mnli_m", "num_bytes": 1794523, "num_examples": 9815}, {"name": "mnli_mm", "num_bytes": 1909903, "num_examples": 9832}, {"name": "fevernli", "num_bytes": 6968676, "num_examples": 19652}, {"name": "anli_r1", "num_bytes": 405400, "num_examples": 1000}, {"name": "anli_r2", "num_bytes": 405263, "num_examples": 1000}, {"name": "anli_r3", "num_bytes": 468098, "num_examples": 1200}, {"name": "wanli", "num_bytes": 812904, "num_examples": 5000}, {"name": "lingnli", "num_bytes": 943264, "num_examples": 4893}, {"name": "xnli_ar", "num_bytes": 1294553, "num_examples": 5010}, {"name": "xnli_bg", "num_bytes": 1573034, "num_examples": 5010}, {"name": "xnli_de", "num_bytes": 996488, "num_examples": 5010}, {"name": "xnli_el", "num_bytes": 1704785, "num_examples": 5010}, {"name": "xnli_en", "num_bytes": 875134, "num_examples": 5010}, {"name": "xnli_es", "num_bytes": 969813, "num_examples": 5010}, {"name": "xnli_fr", "num_bytes": 1029239, "num_examples": 5010}, {"name": "xnli_hi", "num_bytes": 2073073, "num_examples": 5010}, {"name": "xnli_ru", "num_bytes": 1603466, "num_examples": 5010}, {"name": "xnli_sw", "num_bytes": 871651, "num_examples": 5010}, {"name": "xnli_th", "num_bytes": 2147015, "num_examples": 5010}, {"name": "xnli_tr", "num_bytes": 934934, "num_examples": 5010}, {"name": "xnli_ur", "num_bytes": 1416241, "num_examples": 5010}, {"name": "xnli_vi", "num_bytes": 1190217, "num_examples": 5010}, {"name": "xnli_zh", "num_bytes": 777929, "num_examples": 5010}], "download_size": 13735153, "dataset_size": 33165603}} | 2023-11-08T12:59:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mnli_anli_fevernli_wanli_lingnli_xnli_test"
Test data in a harmonized format for multiple NLI datasets. | [
"# Dataset Card for \"mnli_anli_fevernli_wanli_lingnli_xnli_test\"\n\nTest data in a harmonized format for multiple NLI datasets."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mnli_anli_fevernli_wanli_lingnli_xnli_test\"\n\nTest data in a harmonized format for multiple NLI datasets."
]
| [
6,
44
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mnli_anli_fevernli_wanli_lingnli_xnli_test\"\n\nTest data in a harmonized format for multiple NLI datasets."
]
|
90a08babb94ae45374462b89d1ddbdecbf65a649 |
# OffMix-3L: A Novel Code-Mixed Dataset in Bangla-English-Hindi for Offensive Language Identification
**Publication**: *The 11th International Workshop on Natural Language Processing for Social Media (SocialNLP) under AACL-2023.*
**Read in [arXiv](https://arxiv.org/pdf/2310.18387.pdf)**
---
## 📝 Citation
When using the OffMix-3L dataset, please cite the following:
```bibtex
@article{goswami2023offmix,
title={OffMix-3L: A Novel Code-Mixed Dataset in Bangla-English-Hindi for Offensive Language Identification},
author={Goswami, Dhiman and Raihan, Md Nishat and Mahmud, Antara and Anstasopoulos, Antonios and Zampieri, Marcos},
journal={arXiv preprint arXiv:2310.18387},
year={2023}
}
```
---
## 📖 Introduction
Code-mixing is a well-studied linguistic phenomenon in which two or more languages are mixed in text or speech. Several datasets have been built with the goal of training computational models for code-mixing. Although it is very common to observe code-mixing with multiple languages, most available datasets contain code-mixing between only two languages. In this paper, we introduce OffMix-3L, a novel dataset for offensive language identification containing code-mixed data among three languages: Bangla, English, and Hindi.
---
## 📊 Dataset Details
We introduce **OffMix-3L**, a novel three-language code-mixed test dataset with gold standard labels in Bangla-Hindi-English for the task of Offensive Language Identification, containing 1,001 instances.
> We are presenting this dataset exclusively as a test set due to the unique and specialized nature of the task. Such data is very difficult to gather and requires significant expertise to access. The size of the dataset, while limiting for training purposes, offers a high-quality testing environment with gold-standard labels that can serve as a benchmark in this domain.
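
A minimal loading sketch (the repo id matches this card; the single `test` split is an assumption based on the card's description of the data as a test-only benchmark):

```python
from datasets import load_dataset

# OffMix-3L is distributed as a test-only benchmark of 1,001 instances.
offmix = load_dataset("md-nishat-008/OffMix-3L", split="test")
print(offmix[0])
```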
---
## 📈 Dataset Statistics
| | **All** | **Bangla** | **English** | **Hindi** | **Other** |
|-------------------|---------|------------|-------------|-----------|-----------|
| Tokens | 87,190 | 31,228 | 6,690 | 14,694 | 34,578 |
| Types | 18,787 | 7,714 | 1,135 | 1,413 | 8,645 |
| Max. in instance | 173 | 62 | 20 | 47 | 93 |
| Min. in instance | 41 | 4 | 3 | 2 | 8 |
| Avg | 87.10 | 31.20 | 6.68 | 14.68 | 34.54 |
| Std Dev | 20.58 | 8.60 | 3.05 | 5.74 | 10.98 |
*OffMix-3L Data Card. The row "Avg" gives the average number of tokens per instance; the row "Std Dev" gives its standard deviation.*
---
## 📉 Results
| Models | F1 Score |
|----------------|----------|
| BanglishBERT | **0.68** |
| BERT | 0.66 |
| mBERT | 0.63 |
| HingBERT | 0.60 |
| MuRIL | 0.60 |
| HateBERT | 0.60 |
| fBERT | 0.58 |
| roBERTa | 0.58 |
| XLM-R | 0.57 |
| DistilBERT | 0.57 |
| GPT 3.5 Turbo | 0.57 |
| BanglaBERT | 0.54 |
| IndicBERT | 0.55 |
| HindiBERT | 0.43 |
*Weighted F-1 scores for different models, trained on synthetic data and tested on natural data (OffMix-3L).*
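
For reference, the weighted F-1 reported above can be computed with scikit-learn; this is a generic sketch with illustrative labels, not the authors' evaluation script:

```python
from sklearn.metrics import f1_score

# Illustrative gold and predicted labels for a binary offensive / not-offensive task.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Per-class F1, weighted by each class's support.
print(f1_score(y_true, y_pred, average="weighted"))
```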
| md-nishat-008/OffMix-3L | [
"license:cc-by-nd-4.0",
"arxiv:2310.18387",
"region:us"
]
| 2023-11-08T12:55:53+00:00 | {"license": "cc-by-nd-4.0"} | 2023-11-08T12:56:32+00:00 | [
"2310.18387"
]
| []
| TAGS
#license-cc-by-nd-4.0 #arxiv-2310.18387 #region-us
| OffMix-3L: A Novel Code-Mixed Dataset in Bangla-English-Hindi for Offensive Language Identification
===================================================================================================
Publication: *The 11th International Workshop on Natural Language Processing for Social Media (SocialNLP) under AACL-2023.*
Read in arXiv
---
Citation
--------
When using the OffMix-3L dataset, please cite the following:
---
Introduction
------------
Code-mixing is a well-studied linguistic phenomenon in which two or more languages are mixed in text or speech. Several datasets have been built with the goal of training computational models for code-mixing. Although it is very common to observe code-mixing with multiple languages, most available datasets contain code-mixing between only two languages. In this paper, we introduce OffMix-3L, a novel dataset for offensive language identification containing code-mixed data among three languages: Bangla, English, and Hindi.
---
Dataset Details
---------------
We introduce OffMix-3L, a novel three-language code-mixed test dataset with gold standard labels in Bangla-Hindi-English for the task of Offensive Language Identification, containing 1,001 instances.
>
> We are presenting this dataset exclusively as a test set due to the unique and specialized nature of the task. Such data is very difficult to gather and requires significant expertise to access. The size of the dataset, while limiting for training purposes, offers a high-quality testing environment with gold-standard labels that can serve as a benchmark in this domain.
>
>
>
---
Dataset Statistics
------------------
*OffMix-3L Data Card. The row "Avg" gives the average number of tokens per instance; the row "Std Dev" gives its standard deviation.*
---
Results
-------
*Weighted F-1 scores for different models, trained on synthetic data and tested on natural data (OffMix-3L).*
| []
| [
"TAGS\n#license-cc-by-nd-4.0 #arxiv-2310.18387 #region-us \n"
]
| [
26
]
| [
"passage: TAGS\n#license-cc-by-nd-4.0 #arxiv-2310.18387 #region-us \n"
]
|
14bdc51f4fb0363500a28393e63667c5aa09ffdb | # Dataset Card for "bw_spec_cls_80_06"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_06 | [
"region:us"
]
| 2023-11-08T13:06:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "12737", "1": "12985", "2": "12986", "3": "13191", "4": "13197", "5": "13199", "6": "13218", "7": "13220", "8": "13325", "9": "13362", "10": "13378", "11": "13537", "12": "13538", "13": "13539", "14": "13540", "15": "13561", "16": "13562", "17": "13566", "18": "13571", "19": "13596", "20": "13668", "21": "13670", "22": "13735", "23": "13747", "24": "13748", "25": "13749", "26": "13767", "27": "13768", "28": "13804", "29": "13927", "30": "13928", "31": "13929", "32": "13930", "33": "14063", "34": "14208", "35": "14344", "36": "14358", "37": "14386", "38": "14538", "39": "14539", "40": "14541", "41": "14542", "42": "14568", "43": "14569", "44": "14570", "45": "14571", "46": "14572", "47": "14576", "48": "14577", "49": "14578", "50": "14579", "51": "14580", "52": "14581", "53": "14583", "54": "14584", "55": "14585", "56": "14586", "57": "14588", "58": "14589", "59": "14590", "60": "14661", "61": "14684", "62": "14693", "63": "14809", "64": "15210", "65": "15464", "66": "15469", "67": "15471", "68": "15475", "69": "15476", "70": "15487", "71": "15488", "72": "15540", "73": "15541", "74": "15542", "75": "15543", "76": "15625", "77": "15880", "78": "16095", "79": "16155"}}}}], "splits": [{"name": "train", "num_bytes": 89753408.0, "num_examples": 1600}], "download_size": 89234668, "dataset_size": 89753408.0}} | 2023-11-08T13:06:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_06"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_06\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_06\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_06\"\n\nMore Information needed"
]
|
f87f39e74819d716b3db2f658408cf49aa232e61 |
# Introduction
The following is a dataset of chain-of-density summaries that we generated using GPT-4. The approach is slightly modified to account for GPT-4 timeouts, with some additional validation that we added using the [Instructor](https://github.com/jxnl/instructor) library.
We wrote a short blog about how we generated this data [here](https://jxnl.github.io/instructor/blog/2023/11/05/better-summaries-by-finetuning-chain-of-density/#results-and-benchmarks)
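To illustrate the validation pattern, here is a hedged sketch of how Instructor can re-ask GPT-4 when a rewritten summary fails a Pydantic check; the model name, length threshold, and prompt are placeholders, not our exact code:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator

client = instructor.patch(OpenAI())  # patches the client to accept response_model

class RewrittenSummary(BaseModel):
    summary: str

    @field_validator("summary")
    @classmethod
    def check_length(cls, v: str) -> str:
        # Reject implausibly short rewrites; raising here triggers a re-ask.
        if len(v.split()) < 60:
            raise ValueError("Summary too short; rewrite with more entities and detail.")
        return v

resp = client.chat.completions.create(
    model="gpt-4",
    response_model=RewrittenSummary,
    max_retries=2,  # validation failures trigger up to two automatic re-asks
    messages=[{"role": "user", "content": "Rewrite the summary to be denser: ..."}],
)
print(resp.summary)
```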
Here's a quick summary of the individual files that we have (a minimal loading sketch follows the list):
1. `summarization_20`, `summarization_50` and `summarization_all` are the respective `.jsonl` files that we used to fine-tune our models. They contain 20, 50 and 76 examples respectively.
2. `test.csv`: This is a randomly selected group of 100 test articles that were sampled from the original `griffin/chain-of-density` dataset that was used to create our training sets with GPT-4.
3. `train.csv`: This is a randomly selected group of 20 test articles that were sampled from the original `griffin/chain-of-density` dataset which were not provided to our fine-tuned models. These were then used to evaluate their quality and performance.
4. `validation-summaries`: These are the summaries generated by `GPT-4` on the test set. We include the following fields:
- `text`: The original article that was summarized
- `model`: This has a single value of `GPT-4`
- `Summary 1` : The first summary created
- `Summary 2` : The second rewritten summary
- `Summary 3` : The third rewritten summary
- `Summary 4` : The fourth rewritten summary
- `time` : The time taken for the entire chain of density to be created
5. `vanilla_35.csv`: This contains the summaries generated by a vanilla GPT-3.5 model that was prompted to generate an entity-dense summary.
6. `results.csv`: These are the summaries generated by the individual fine-tuned models. We include the following fields
- `Article`: The original article that was summarized
- `model`: This is either `fine-tuned-20`, `fine-tuned-50` or `fine-tuned-76`, which represent our GPT-3.5 models fine-tuned on 20, 50 or 76 examples respectively
- `Summary`: The summary generated by the fine-tuned model
- `Time` : This was the amount of time it took for the summary to be generated
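
As promised above, a minimal loading sketch (file names are taken from the list; the `datasets`/`pandas` calls are generic usage, not a prescribed interface):

```python
import pandas as pd
from datasets import load_dataset

# Read one of the CSVs straight from the hub repo.
results = load_dataset(
    "ivanleomk/gpt4-chain-of-density", data_files="results.csv", split="train"
)
print(results.column_names)

# Or load a downloaded copy locally with pandas.
test_df = pd.read_csv("test.csv")
print(test_df.columns.tolist())
```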
| ivanleomk/gpt4-chain-of-density | [
"task_categories:summarization",
"language:en",
"license:mit",
"region:us"
]
| 2023-11-08T13:13:41+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["summarization"]} | 2023-11-12T14:12:28+00:00 | []
| [
"en"
]
| TAGS
#task_categories-summarization #language-English #license-mit #region-us
|
# Introduction
The following is a dataset of chain-of-density summaries that we generated using GPT-4. The approach is slightly modified to account for GPT-4 timeouts, with some additional validation that we added using the Instructor library.
We wrote a short blog about how we generated this data here
Here's a quick summary of the individual files that we have
1. 'summarization_20', 'summarization_50' and 'summarization_all' are the respective '.jsonl' files that we used to fine-tune our models. They contain 20, 50 and 76 examples respectively.
2. 'URL': This is a randomly selected group of 100 test articles that were sampled from the original 'griffin/chain-of-density' article that was used to create our training sets with GPT-4.
3. 'URL': This is a randomly selected group of 20 test articles that were sampled from the original 'griffin/chain-of-density' article which were not provided to our fine-tuned models. These were then used to evaluate their quality and performance.
4. 'validation-summaries': These are the summaries generated by 'GPT-4' on the test set. We include the following fields:
- 'text': The original article that was summarized
- 'model': This has a single value of 'GPT-4'
- 'Summary 1' : The first summary created
- 'Summary 2' : The second rewritten summary
- 'Summary 3' : The third rewritten summary
- 'Summary 4' : The fourth rewritten summary
- 'time' : The time taken for the entire chain of density to be created
5. 'vanilla_35.csv': This contains the summaries generated by a vanilla GPT 3.5 model that was prompted to generate an entity dense summary.
6. 'URL' : These are the summaries generated by the individual fine-tuned models. We include the following fields
- 'Article': The original article that was summarized
- 'model': This is either 'fine-tuned-20', 'fine-tuned-50' or 'fine-tuned-76' which represent our GPT3.5 model that was fine tuned on 20, 50 or 76 examples respectively
- 'Summary': The summary generated by the fine-tuned model
- 'Time' : This was the amount of time it took for the summary to be generated
| [
"# Introduction\n\nThe following is a dataset which consists of some chain of density summaries which we generated using GPT-4. The approach is slightly modified to account for GPT-4 timeouts with some additional validation that we added using the Instructor Library.\n\nWe wrote a short blog about how we generated this date here\n\nHere's a quick summary of the individual files that we have\n\n\n1. 'summarization_20', 'summarization_50' and 'summarization_all' are the respective '.jsonl' files that we used to fine-tune our models. They contain 20, 50 and 76 examples respectively.\n\n2. 'URL': This is a randomly selected group of 100 test articles that were sampled from the original 'griffin/chain-of-density' article that was used to create our training sets with GPT-4.\n\n3. 'URL': This is a randomly selected group of 20 test articles that were sampled from the original 'griffin/chain-of-density' article which were not provided to our fine-tuned models. These were then used to evaluate their quality and performance.\n\n4, 'validation-summaries': These are the summaries generated by 'GPT-4' on the test set. We include the following fields:\n - 'text': The original article that was summarized\n\t - 'model': This has a single value of 'GPT-4'\n\t - 'Summary 1' : The first summary created\n\t - 'Summary 2' : The second rewritten summary\n\t - 'Summary 3' : The third rewritten summary\n\t - 'Summary 4' : The fourth rewritten summary\n\t - 'time' : The time taken for the entire chain of density to be created\n\n5. 'vanilla_35.csv': This contains the summaries generated by a vanilla GPT 3.5 model that was prompted to generate an entity dense summary.\n\n6. 'URL' : These are the summaries generated by the individual fine-tuned models. We include the following fields\n\t- 'Article': The original article that was summarized\n\t- 'model': This is either 'fine-tuned-20', 'fine-tuned-50' or 'fine-tuned-76' which represent our GPT3.5 model that was fine tuned on 20, 50 or 76 examples respectively\n\t- 'Summary': The summary generated by the fine-tuned model\n\t- 'Time' : This was the amount of time it took for the summary to be generated"
]
| [
"TAGS\n#task_categories-summarization #language-English #license-mit #region-us \n",
"# Introduction\n\nThe following is a dataset which consists of some chain of density summaries which we generated using GPT-4. The approach is slightly modified to account for GPT-4 timeouts with some additional validation that we added using the Instructor Library.\n\nWe wrote a short blog about how we generated this date here\n\nHere's a quick summary of the individual files that we have\n\n\n1. 'summarization_20', 'summarization_50' and 'summarization_all' are the respective '.jsonl' files that we used to fine-tune our models. They contain 20, 50 and 76 examples respectively.\n\n2. 'URL': This is a randomly selected group of 100 test articles that were sampled from the original 'griffin/chain-of-density' article that was used to create our training sets with GPT-4.\n\n3. 'URL': This is a randomly selected group of 20 test articles that were sampled from the original 'griffin/chain-of-density' article which were not provided to our fine-tuned models. These were then used to evaluate their quality and performance.\n\n4, 'validation-summaries': These are the summaries generated by 'GPT-4' on the test set. We include the following fields:\n - 'text': The original article that was summarized\n\t - 'model': This has a single value of 'GPT-4'\n\t - 'Summary 1' : The first summary created\n\t - 'Summary 2' : The second rewritten summary\n\t - 'Summary 3' : The third rewritten summary\n\t - 'Summary 4' : The fourth rewritten summary\n\t - 'time' : The time taken for the entire chain of density to be created\n\n5. 'vanilla_35.csv': This contains the summaries generated by a vanilla GPT 3.5 model that was prompted to generate an entity dense summary.\n\n6. 'URL' : These are the summaries generated by the individual fine-tuned models. We include the following fields\n\t- 'Article': The original article that was summarized\n\t- 'model': This is either 'fine-tuned-20', 'fine-tuned-50' or 'fine-tuned-76' which represent our GPT3.5 model that was fine tuned on 20, 50 or 76 examples respectively\n\t- 'Summary': The summary generated by the fine-tuned model\n\t- 'Time' : This was the amount of time it took for the summary to be generated"
]
| [
25,
561
]
| [
"passage: TAGS\n#task_categories-summarization #language-English #license-mit #region-us \n"
]
|
d09498f234e2d49793cd2320845789136b5f564a | # Dataset Card for "mental_health_chatbot_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dialbird/mental_health_chatbot_dataset | [
"region:us"
]
| 2023-11-08T13:14:24+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 189421, "num_examples": 172}], "download_size": 102271, "dataset_size": 189421}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T13:14:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mental_health_chatbot_dataset"
More Information needed | [
"# Dataset Card for \"mental_health_chatbot_dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mental_health_chatbot_dataset\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mental_health_chatbot_dataset\"\n\nMore Information needed"
]
|
e023b519d79a229783c55805c454b1e7afacde5a | # Dataset Card for "fingpt_chatglm2_sentiment_instruction_lora_ft_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Cesar7980/fingpt_chatglm2_sentiment_instruction_lora_ft_dataset | [
"region:us"
]
| 2023-11-08T13:16:52+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18540941.869938433, "num_examples": 76772}], "download_size": 6417302, "dataset_size": 18540941.869938433}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T13:17:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fingpt_chatglm2_sentiment_instruction_lora_ft_dataset"
More Information needed | [
"# Dataset Card for \"fingpt_chatglm2_sentiment_instruction_lora_ft_dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt_chatglm2_sentiment_instruction_lora_ft_dataset\"\n\nMore Information needed"
]
| [
6,
31
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt_chatglm2_sentiment_instruction_lora_ft_dataset\"\n\nMore Information needed"
]
|
f3cc5b661584b16df0f0dcbf32d362ea32a72446 | # Dataset Card for "fm-updates-alpaca-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | coastalcph/fm-updates-alpaca-7b | [
"region:us"
]
| 2023-11-08T13:29:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "query", "struct": [{"name": "label", "dtype": "string"}, {"name": "objects", "list": [{"name": "aliases", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "qid", "dtype": "string"}]}, {"name": "qid", "dtype": "string"}, {"name": "rel_id", "dtype": "string"}, {"name": "relation", "dtype": "string"}]}, {"name": "prediction", "struct": [{"name": "predictions", "list": [{"name": "answer", "dtype": "string"}, {"name": "first_token_probability", "dtype": "float64"}, {"name": "per_token_probability", "sequence": "float64"}, {"name": "perplexity", "dtype": "float64"}]}, {"name": "query", "dtype": "string"}]}, {"name": "f1", "dtype": "float64"}, {"name": "relation", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "original_answer", "dtype": "string"}, {"name": "updates", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 427781.57826343767, "num_examples": 1098}, {"name": "validation", "num_bytes": 46752.08505611341, "num_examples": 120}], "download_size": 301820, "dataset_size": 474533.6633195511}} | 2023-12-06T09:11:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fm-updates-alpaca-7b"
More Information needed | [
"# Dataset Card for \"fm-updates-alpaca-7b\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fm-updates-alpaca-7b\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fm-updates-alpaca-7b\"\n\nMore Information needed"
]
|
3994e76d24951a7da660509b4f86098f17cd26f3 |
AHN dataset for monocular height estimation
| XShadow/AHN_Height_Estimation | [
"license:cc-by-4.0",
"region:us"
]
| 2023-11-08T13:33:42+00:00 | {"license": "cc-by-4.0"} | 2023-11-08T15:48:52+00:00 | []
| []
| TAGS
#license-cc-by-4.0 #region-us
|
AHN dataset for monocular height estimation
| []
| [
"TAGS\n#license-cc-by-4.0 #region-us \n"
]
| [
15
]
| [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n"
]
|
9a5edb122bc0f6122a903f71fcd17155283bef4b | # Dataset Card for "bw_spec_cls_80_07"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_07 | [
"region:us"
]
| 2023-11-08T13:33:47+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "16158", "1": "16162", "2": "16163", "3": "16334", "4": "16354", "5": "16743", "6": "16744", "7": "16745", "8": "16747", "9": "16819", "10": "16820", "11": "16821", "12": "16822", "13": "16878", "14": "16879", "15": "16880", "16": "17132", "17": "17462", "18": "17491", "19": "17496", "20": "17499", "21": "17500", "22": "17573", "23": "17588", "24": "17605", "25": "17606", "26": "17607", "27": "17608", "28": "17609", "29": "17610", "30": "17611", "31": "17631", "32": "17632", "33": "17633", "34": "17634", "35": "17635", "36": "17636", "37": "17637", "38": "17644", "39": "17735", "40": "17782", "41": "17884", "42": "17906", "43": "18031", "44": "18032", "45": "18033", "46": "18034", "47": "18043", "48": "18044", "49": "18124", "50": "18144", "51": "18145", "52": "18146", "53": "18159", "54": "18197", "55": "18607", "56": "18611", "57": "18876", "58": "18877", "59": "18887", "60": "19073", "61": "19074", "62": "19179", "63": "19184", "64": "19187", "65": "19192", "66": "19412", "67": "19413", "68": "19415", "69": "19416", "70": "19417", "71": "19418", "72": "19420", "73": "19422", "74": "19423", "75": "19425", "76": "19438", "77": "19441", "78": "19442", "79": "19459"}}}}], "splits": [{"name": "train", "num_bytes": 90744057.6, "num_examples": 1600}], "download_size": 89863005, "dataset_size": 90744057.6}} | 2023-11-08T13:33:51+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_07"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_07\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_07\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_07\"\n\nMore Information needed"
]
|
ecad70b25e36d64a5fdcfe7a152d0295e0b67567 | Causal pairs generated with ChatGPT. Training set. | gh1407/synthetic_causal_pairs | [
"license:mit",
"region:us"
]
| 2023-11-08T13:37:47+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "Unnamed: 0.1", "dtype": "int64"}, {"name": "Unnamed: 0", "dtype": "int64"}, {"name": "outlet", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "political_leaning", "dtype": "string"}, {"name": "gpt_causal_graph", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 923574, "num_examples": 200}], "download_size": 0, "dataset_size": 923574}} | 2023-11-08T14:23:26+00:00 | []
| []
| TAGS
#license-mit #region-us
| Causal pairs generated with ChatGPT. Training set. | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
eb0e95d7259ab83fcad9fabed55cd59b444d85ca | # Dataset Card for "Alcora2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | GHOFRANEE/Alcora2 | [
"region:us"
]
| 2023-11-08T13:39:29+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 260321, "num_examples": 102}], "download_size": 20852, "dataset_size": 260321}} | 2023-11-08T14:17:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Alcora2"
More Information needed | [
"# Dataset Card for \"Alcora2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Alcora2\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Alcora2\"\n\nMore Information needed"
]
|
64d0e2c16018de7e9a593a0a71d4b6b66fa5f803 | # Dataset Card for "data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aminlouhichi/data | [
"region:us"
]
| 2023-11-08T13:40:39+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32311493.0, "num_examples": 142}, {"name": "validation", "num_bytes": 13269660.0, "num_examples": 59}, {"name": "test", "num_bytes": 13666341.0, "num_examples": 59}], "download_size": 56280635, "dataset_size": 59247494.0}} | 2023-11-08T14:13:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data"
More Information needed | [
"# Dataset Card for \"data\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data\"\n\nMore Information needed"
]
| [
6,
11
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data\"\n\nMore Information needed"
]
|
1de5d48d603dbbe5b80d3baf621c1bea36eeace8 | # Dataset Card for "bw_spec_cls_80_08"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_08 | [
"region:us"
]
| 2023-11-08T13:54:08+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "19707", "1": "19708", "2": "19729", "3": "19758", "4": "19759", "5": "19760", "6": "19889", "7": "19890", "8": "19891", "9": "20050", "10": "20361", "11": "20362", "12": "20364", "13": "20365", "14": "20366", "15": "20369", "16": "20372", "17": "20373", "18": "20374", "19": "20375", "20": "20376", "21": "20424", "22": "20432", "23": "20469", "24": "20704", "25": "21058", "26": "21085", "27": "21087", "28": "21167", "29": "21228", "30": "21231", "31": "21232", "32": "21400", "33": "21401", "34": "21402", "35": "21403", "36": "21404", "37": "21409", "38": "21422", "39": "21657", "40": "21672", "41": "21676", "42": "21677", "43": "21707", "44": "21774", "45": "21842", "46": "21859", "47": "21860", "48": "21891", "49": "21895", "50": "22088", "51": "22091", "52": "22093", "53": "22094", "54": "22095", "55": "22097", "56": "22295", "57": "22348", "58": "22472", "59": "22473", "60": "22474", "61": "22475", "62": "22476", "63": "22477", "64": "22478", "65": "22479", "66": "22480", "67": "22481", "68": "23013", "69": "23015", "70": "23016", "71": "23037", "72": "23039", "73": "23155", "74": "23156", "75": "23172", "76": "23353", "77": "23371", "78": "23372", "79": "23505"}}}}], "splits": [{"name": "train", "num_bytes": 87970180.8, "num_examples": 1600}], "download_size": 88350209, "dataset_size": 87970180.8}} | 2023-11-08T13:54:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_08"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_08\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_08\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_08\"\n\nMore Information needed"
]
|
a7ed7974ed584731f351e80f9ad941412894f046 | # Dataset Card for "CUAD"
https://arxiv.org/pdf/2103.06268.pdf | Nadav-Timor/CUAD | [
"arxiv:2103.06268",
"region:us"
]
| 2023-11-08T14:09:19+00:00 | {"paperswithcode_id": "cuad", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer_text", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1142083198, "num_examples": 13823}], "download_size": 14209324, "dataset_size": 1142083198}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T14:14:58+00:00 | [
"2103.06268"
]
| []
| TAGS
#arxiv-2103.06268 #region-us
| # Dataset Card for "CUAD"
URL | [
"# Dataset Card for \"CUAD\"\n\nURL"
]
| [
"TAGS\n#arxiv-2103.06268 #region-us \n",
"# Dataset Card for \"CUAD\"\n\nURL"
]
| [
15,
10
]
| [
"passage: TAGS\n#arxiv-2103.06268 #region-us \n# Dataset Card for \"CUAD\"\n\nURL"
]
|
e5354970102dd07f1727cb6cc7b572af066199da | # Small PubLayNet (WebDataset)
This dataset consists of the first WebDataset shards of PubLayNet from http://storage.googleapis.com/nvdata-publaynet
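For a quick look at the data, the shards can be streamed with the `datasets` library, which understands WebDataset repositories; this is only a sketch, and the split name and per-sample field names are assumptions since the card does not list them.
```python
from datasets import load_dataset

# Stream the shards instead of downloading them all up front.
ds = load_dataset("lhoestq/small-publaynet-wds", split="train", streaming=True)

sample = next(iter(ds))
print(sample.keys())  # inspect the fields packed into each WebDataset sample
```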
It is mostly used to test the WebDataset integration within the Hugging Face ecosystem. | lhoestq/small-publaynet-wds | [
"webdataset",
"region:us"
]
| 2023-11-08T14:09:48+00:00 | {"tags": ["webdataset"]} | 2023-11-08T14:23:49+00:00 | []
| []
| TAGS
#webdataset #region-us
| # Small PubLayNet (WebDataset)
This dataset consists of the first WebDataset shards of PubLayNet from URL
It is mostly used to test the WebDataset integration within the Hugging Face ecosystem. | [
"# Small PubLayNet (WebDataset)\n\nThis dataset consists in the first WebDataset shards of PubLayNet from URL\n\nIt is mostly used to test the WebDataset integration within the Hugging Face ecosystem."
]
| [
"TAGS\n#webdataset #region-us \n",
"# Small PubLayNet (WebDataset)\n\nThis dataset consists in the first WebDataset shards of PubLayNet from URL\n\nIt is mostly used to test the WebDataset integration within the Hugging Face ecosystem."
]
| [
10,
50
]
| [
"passage: TAGS\n#webdataset #region-us \n# Small PubLayNet (WebDataset)\n\nThis dataset consists in the first WebDataset shards of PubLayNet from URL\n\nIt is mostly used to test the WebDataset integration within the Hugging Face ecosystem."
]
|
37127b41e459afbe11b10bbe725a219eb30547d3 | # Dataset Card for "bw_spec_cls_80_09"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_09 | [
"region:us"
]
| 2023-11-08T14:14:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "24216", "1": "24217", "2": "24218", "3": "24362", "4": "24363", "5": "24364", "6": "24365", "7": "24366", "8": "24367", "9": "24368", "10": "24369", "11": "24370", "12": "24371", "13": "24418", "14": "24420", "15": "24421", "16": "24422", "17": "24423", "18": "24424", "19": "24425", "20": "24426", "21": "24427", "22": "24428", "23": "24429", "24": "24430", "25": "24431", "26": "24432", "27": "24512", "28": "24515", "29": "24521", "30": "24524", "31": "24698", "32": "24699", "33": "24700", "34": "24701", "35": "24702", "36": "24717", "37": "24720", "38": "24739", "39": "24741", "40": "24742", "41": "24745", "42": "24746", "43": "24747", "44": "24748", "45": "24749", "46": "24842", "47": "24898", "48": "24899", "49": "24901", "50": "24912", "51": "24915", "52": "24917", "53": "24963", "54": "24975", "55": "24983", "56": "25063", "57": "25066", "58": "25104", "59": "25124", "60": "25215", "61": "25216", "62": "25227", "63": "25232", "64": "25233", "65": "25234", "66": "25235", "67": "25324", "68": "25378", "69": "25601", "70": "25603", "71": "25605", "72": "25606", "73": "25608", "74": "25609", "75": "25795", "76": "25796", "77": "25797", "78": "25802", "79": "25804"}}}}], "splits": [{"name": "train", "num_bytes": 87063169.6, "num_examples": 1600}], "download_size": 86900268, "dataset_size": 87063169.6}} | 2023-11-08T14:14:44+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_09"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_09\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_09\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_09\"\n\nMore Information needed"
]
|
f7b9fdeb72d3031170fab122984ae8a14c1ce071 |
# Dataset Card for UltraFeedback Binarized
## Dataset Description
This is a pre-processed version of the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback) and was used to train [Zephyr-7B-β](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a state-of-the-art chat model at the 7B parameter scale.
The original UltraFeedback dataset consists of 64k prompts, where each prompt is accompanied by four model completions from a wide variety of open and proprietary models. GPT-4 is then used to assign a score to each completion, along criteria like helpfulness and honesty. To create `UltraFeedback Binarized`, we picked the completion with the highest mean score as the "chosen" one, and one of the remaining 3 at random as the "rejected" one. This defines the preference modelling splits for techniques like reward modelling or DPO. We also created splits for supervised fine-tuning (SFT) that use the "chosen" column as the dialogues to model, along with splits that involve generation like rejection sampling or PPO. For details on the dataset processing, see the accompanying [script](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized/blob/main/create_dataset.py).
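As a rough illustration of that selection rule, a sketch over one prompt's four scored completions might look like the following; the `response` and `mean_score` field names are assumptions, and this is not the accompanying script itself.
```python
import random

def binarize(completions):
    """Split four scored completions into a (chosen, rejected) pair.

    completions: list of dicts with 'response' and 'mean_score' keys (names assumed).
    """
    ranked = sorted(completions, key=lambda c: c["mean_score"], reverse=True)
    chosen = ranked[0]                    # highest mean GPT-4 score
    rejected = random.choice(ranked[1:])  # one of the remaining three, at random
    return chosen, rejected
```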
## Dataset Structure
### Data Splits
The dataset has six splits, suitable for:
* Supervised fine-tuning (`sft`).
* Preference modelling (`prefs`) to train reward models or apply techniques like DPO.
* Generation ranking (`gen`) via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
| train_sft | test_sft | train_prefs | test_prefs | train_gen | test_gen |
|:-------:|:-----------:|:-----:| :-----:| :-----:| :-----:|
| 61966 | 1000 | 61966 | 2000 | 61966 | 1000 |
The dataset is stored in parquet format with each entry using the following schema:
```
{
"prompt": "Which books are still banned in Canada?",
"chosen": [
{
"content": "Which books are still banned in Canada?",
"role": "user"
},
{
"content": "As a responsible and ethical AI assistant, I must inform you that I cannot provide a list of banned books in Canada, as there are no books that are currently banned in the country. Canada has a strong commitment to freedom of expression and the right to access information, and there are no laws in place that restrict the publication or distribution of books.\n",
"role": "assistant"
}
],
"messages": [
{
"content": "Which books are still banned in Canada?",
"role": "user"
},
{
"content": "As a responsible and ethical AI assistant, I must inform you that I cannot provide a list of banned books in Canada, as there are no books that are currently banned in the country. Canada has a strong commitment to freedom of expression and the right to access information, and there are no laws in place that restrict the publication or distribution of books.\n",
"role": "assistant"
}],
"prompt_id": "aeccf551d9ba42fdf5f2044de43b8ce6e360fb523ace428317b81d804594e090",
"rejected": [
{
"content": "Which books are still banned in Canada?",
"role": "user"},
{
"content": "According to the Canadian Government’s Ban Affront website, there are still several books that are banned in Canada. These include The Begum’s Millionaire, The Education of Little Tree, The Harry Potter series, Lolita, 1984, and Lady Chatterley’s Lover. Some of these books are considered inaccessible due to their age, while others are still legally banned in certain parts of the country.",
"role": "assistant"
}
],
"score_chosen": 8.0,
"score_rejected": 5.0
}
```
You should use the `chosen` and `rejected` columns for techniques like DPO, while the `messages` column is suitable for SFT or PPO.
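If it helps, the preference splits can be pulled directly with the `datasets` library; a minimal sketch, using the original HuggingFaceH4 repo id that this card's processing script links to.
```python
from datasets import load_dataset

# Preference-modelling split with `chosen`/`rejected` dialogue lists (for DPO or reward models).
prefs = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

example = prefs[0]
print(example["prompt"])
print(example["score_chosen"], example["score_rejected"])
```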
## Citation
If you find this dataset useful in your work, please cite the original UltraFeedback dataset: https://huggingface.co/datasets/openbmb/UltraFeedback
You may also wish to cite the Zephyr 7B technical report:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | zhengr/ultrafeedback_binarized | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"license:mit",
"arxiv:2310.16944",
"region:us"
]
| 2023-11-08T14:18:08+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["conversational", "text-generation"], "pretty_name": "UltraFeedback Binarized", "configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}, {"split": "test_sft", "path": "data/test_sft-*"}, {"split": "train_gen", "path": "data/train_gen-*"}, {"split": "test_gen", "path": "data/test_gen-*"}, {"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "score_chosen", "dtype": "float64"}, {"name": "score_rejected", "dtype": "float64"}], "splits": [{"name": "train_sft", "num_bytes": 397273717, "num_examples": 61966}, {"name": "test_sft", "num_bytes": 6270496, "num_examples": 1000}, {"name": "train_gen", "num_bytes": 316634390, "num_examples": 61966}, {"name": "test_gen", "num_bytes": 5008220, "num_examples": 1000}, {"name": "train_prefs", "num_bytes": 397273717, "num_examples": 61966}, {"name": "test_prefs", "num_bytes": 12782225, "num_examples": 2000}], "download_size": 636467735, "dataset_size": 1135242765}} | 2023-11-08T14:18:27+00:00 | [
"2310.16944"
]
| [
"en"
]
| TAGS
#task_categories-conversational #task_categories-text-generation #language-English #license-mit #arxiv-2310.16944 #region-us
| Dataset Card for UltraFeedback Binarized
========================================
Dataset Description
-------------------
This is a pre-processed version of the UltraFeedback dataset and was used to train Zephyr-7B-β, a state-of-the-art chat model at the 7B parameter scale.
The original UltraFeedback dataset consists of 64k prompts, where each prompt is accompanied by four model completions from a wide variety of open and proprietary models. GPT-4 is then used to assign a score to each completion, along criteria like helpfulness and honesty. To create 'UltraFeedback Binarized', we picked the completion with the highest mean score as the "chosen" one, and one of the remaining 3 at random as the "rejected" one. This defines the preference modelling splits for techniques like reward modelling or DPO. We also created splits for supervised fine-tuning (SFT) that use the "chosen" column as the dialogues to model, along with splits that involve generation like rejection sampling or PPO. For details on the dataset processing, see the accompanying script.
Dataset Structure
-----------------
### Data Splits
The dataset has six splits, suitable for:
* Supervised fine-tuning ('sft').
* Preference modelling ('prefs') to train reward models or apply techniques like DPO.
* Generation ranking ('gen') via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
The dataset is stored in parquet format with each entry using the following schema:
You should use the 'chosen' and 'rejected' columns for techniques like DPO, while the 'messages' column is suitable for SFT or PPO.
If you find this dataset useful in your work, please cite the original UltraFeedback dataset: URL
You may also wish to cite the Zephyr 7B technical report:
| [
"### Data Splits\n\n\nThe dataset has six splits, suitable for:\n\n\n* Supervised fine-tuning ('sft').\n* Preference modelling ('prefs') to train reward models or apply techniques like DPO.\n* Generation ranking ('gen') via techniques like rejection sampling or PPO.\n\n\nThe number of examples per split is shown as follows:\n\n\n\nThe dataset is stored in parquet format with each entry using the following schema:\n\n\nYou should use the 'chosen' and 'rejected' columns for techniques like DPO, while the 'messages' column is suitable for SFT or PPO.\n\n\nIf you find this dataset is useful in your work, please cite the original UltraFeedback dataset: URL\n\n\nYou may also wish to cite the Zephyr 7B technical report:"
]
| [
"TAGS\n#task_categories-conversational #task_categories-text-generation #language-English #license-mit #arxiv-2310.16944 #region-us \n",
"### Data Splits\n\n\nThe dataset has six splits, suitable for:\n\n\n* Supervised fine-tuning ('sft').\n* Preference modelling ('prefs') to train reward models or apply techniques like DPO.\n* Generation ranking ('gen') via techniques like rejection sampling or PPO.\n\n\nThe number of examples per split is shown as follows:\n\n\n\nThe dataset is stored in parquet format with each entry using the following schema:\n\n\nYou should use the 'chosen' and 'rejected' columns for techniques like DPO, while the 'messages' column is suitable for SFT or PPO.\n\n\nIf you find this dataset is useful in your work, please cite the original UltraFeedback dataset: URL\n\n\nYou may also wish to cite the Zephyr 7B technical report:"
]
| [
45,
183
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #language-English #license-mit #arxiv-2310.16944 #region-us \n### Data Splits\n\n\nThe dataset has six splits, suitable for:\n\n\n* Supervised fine-tuning ('sft').\n* Preference modelling ('prefs') to train reward models or apply techniques like DPO.\n* Generation ranking ('gen') via techniques like rejection sampling or PPO.\n\n\nThe number of examples per split is shown as follows:\n\n\n\nThe dataset is stored in parquet format with each entry using the following schema:\n\n\nYou should use the 'chosen' and 'rejected' columns for techniques like DPO, while the 'messages' column is suitable for SFT or PPO.\n\n\nIf you find this dataset is useful in your work, please cite the original UltraFeedback dataset: URL\n\n\nYou may also wish to cite the Zephyr 7B technical report:"
]
|
3727f36f14e655ebee106c1b154d810ae8224b08 | # Dataset Card for "trial"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | srishtikumbhare/trial | [
"region:us"
]
| 2023-11-08T14:25:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 264370838.0, "num_examples": 50}], "download_size": 263685192, "dataset_size": 264370838.0}} | 2023-11-09T07:08:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "trial"
More Information needed | [
"# Dataset Card for \"trial\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"trial\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"trial\"\n\nMore Information needed"
]
|
e3fd14df9d9b6c6245934deb3f24d77fdda6e9e1 |
# Dataset Card for UltraChat 200k
## Dataset Description
This is a heavily filtered version of the [UltraChat](https://github.com/thunlp/UltraChat) dataset and was used to train [Zephyr-7B-β](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a state-of-the-art 7B chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT, spanning a wide range of topics. To create `UltraChat 200k`, we applied the following logic:
- Selection of a subset of data for faster supervised fine tuning.
- Truecasing of the dataset, as we observed around 5% of the data contained grammatical errors like "Hello. how are you?" instead of "Hello. How are you?"
- Removal of dialogues where the assistant replies with phrases like "I do not have emotions" or "I don't have opinions", even for fact-based prompts that don't involve either.
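As a toy illustration of the third filter, one might screen dialogues like this; a sketch assuming each dialogue is a list of `{role, content}` messages, with a hypothetical marker list rather than the exact phrases used in the actual filtering.
```python
REFUSAL_MARKERS = ("i do not have emotions", "i don't have opinions")  # illustrative, not exhaustive

def keep_dialogue(messages):
    """Drop dialogues whose assistant turns fall back on canned disclaimers."""
    for message in messages:
        if message["role"] == "assistant" and any(
            marker in message["content"].lower() for marker in REFUSAL_MARKERS
        ):
            return False
    return True
```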
## Dataset Structure
The dataset has four splits, suitable for:
* Supervised fine-tuning (`sft`).
* Generation ranking (`gen`) via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
| train_sft | test_sft | train_gen | test_gen |
|:-------:|:-----------:|:-----:| :-----:|
| 207865 | 23110 | 256032 | 28304 |
The dataset is stored in parquet format with each entry using the following schema:
```
{
"prompt": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"messages":[
{
"content": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"role": "user"
},
{
"content": "Name: Ava\n\n Ava was just 16 years old when the world as she knew it came crashing down. The government had collapsed, leaving behind a chaotic and lawless society. ...",
"role": "assistant"
},
{
"content": "Wow, Ava's story is so intense and inspiring! Can you provide me with more details. ...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
},
{
"content": "That's really interesting! I would love to hear more...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
}
],
"prompt_id": "d938b65dfe31f05f80eb8572964c6673eddbd68eff3db6bd234d7f1e3b86c2af"
}
```
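The splits can then be loaded by name; a minimal sketch, where the repo id points at the original HuggingFaceH4 release that this card documents (adjust it if loading a mirror).
```python
from datasets import load_dataset

# Supervised fine-tuning split; each row carries a prompt, a prompt_id and a list of messages.
sft = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

row = sft[0]
print(row["prompt_id"], len(row["messages"]))
```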
## Citation
If you find this dataset useful in your work, please cite the original UltraChat dataset:
```
@misc{ding2023enhancing,
title={Enhancing Chat Language Models by Scaling High-quality Instructional Conversations},
author={Ning Ding and Yulin Chen and Bokai Xu and Yujia Qin and Zhi Zheng and Shengding Hu and Zhiyuan Liu and Maosong Sun and Bowen Zhou},
year={2023},
eprint={2305.14233},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
You may also wish to cite the Zephyr 7B technical report:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | zhengr/ultrachat_200k | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"arxiv:2305.14233",
"arxiv:2310.16944",
"region:us"
]
| 2023-11-08T14:46:06+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["conversational", "text-generation"], "pretty_name": "UltraChat 200k", "configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}, {"split": "test_sft", "path": "data/test_sft-*"}, {"split": "train_gen", "path": "data/train_gen-*"}, {"split": "test_gen", "path": "data/test_gen-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_sft", "num_bytes": 1397058554, "num_examples": 207865}, {"name": "test_sft", "num_bytes": 154695659, "num_examples": 23110}, {"name": "train_gen", "num_bytes": 1347396812, "num_examples": 256032}, {"name": "test_gen", "num_bytes": 148276089, "num_examples": 28304}], "download_size": 1624049723, "dataset_size": 3047427114}} | 2023-11-08T14:47:02+00:00 | [
"2305.14233",
"2310.16944"
]
| [
"en"
]
| TAGS
#task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #arxiv-2305.14233 #arxiv-2310.16944 #region-us
| Dataset Card for UltraChat 200k
===============================
Dataset Description
-------------------
This is a heavily filtered version of the UltraChat dataset and was used to train Zephyr-7B-β, a state-of-the-art 7B chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT, spanning a wide range of topics. To create 'UltraChat 200k', we applied the following logic:
* Selection of a subset of data for faster supervised fine tuning.
* Truecasing of the dataset, as we observed around 5% of the data contained grammatical errors like "Hello. how are you?" instead of "Hello. How are you?"
* Removal of dialogues where the assistant replies with phrases like "I do not have emotions" or "I don't have opinions", even for fact-based prompts that don't involve either.
Dataset Structure
-----------------
The dataset has four splits, suitable for:
* Supervised fine-tuning ('sft').
* Generation ranking ('gen') via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
The dataset is stored in parquet format with each entry using the following schema:
If you find this dataset useful in your work, please cite the original UltraChat dataset:
You may also wish to cite the Zephyr 7B technical report:
| []
| [
"TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #arxiv-2305.14233 #arxiv-2310.16944 #region-us \n"
]
| [
66
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #arxiv-2305.14233 #arxiv-2310.16944 #region-us \n"
]
|
94424b361c7ee78f52109b64cce4f7ce9ee450d9 | # Dataset Card for "bw_spec_cls_80_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_10 | [
"region:us"
]
| 2023-11-08T15:06:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "26007", "1": "26008", "2": "26010", "3": "26011", "4": "26012", "5": "26013", "6": "26014", "7": "26016", "8": "26017", "9": "26020", "10": "26021", "11": "26022", "12": "26025", "13": "26026", "14": "26034", "15": "26035", "16": "26036", "17": "26169", "18": "26174", "19": "26298", "20": "26301", "21": "26302", "22": "26307", "23": "26322", "24": "26464", "25": "26465", "26": "26466", "27": "26583", "28": "26600", "29": "26605", "30": "26629", "31": "26638", "32": "26639", "33": "26640", "34": "26641", "35": "26642", "36": "26643", "37": "26651", "38": "26652", "39": "26653", "40": "26654", "41": "26655", "42": "26656", "43": "26657", "44": "26658", "45": "26659", "46": "26674", "47": "26681", "48": "26754", "49": "26765", "50": "27164", "51": "27177", "52": "27194", "53": "27195", "54": "27197", "55": "27198", "56": "27258", "57": "27406", "58": "27454", "59": "27455", "60": "27456", "61": "27547", "62": "27548", "63": "27549", "64": "27550", "65": "27551", "66": "27552", "67": "27609", "68": "27610", "69": "27611", "70": "27612", "71": "27613", "72": "27667", "73": "27673", "74": "27797", "75": "27799", "76": "27802", "77": "27803", "78": "27804", "79": "27805"}}}}], "splits": [{"name": "train", "num_bytes": 86008633.6, "num_examples": 1600}], "download_size": 87234967, "dataset_size": 86008633.6}} | 2023-11-08T15:06:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_10"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_10\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_10\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_10\"\n\nMore Information needed"
]
|
ace6fe2e1ae751a363ef4b5f85983379e55f2a9a | # Dataset Card for "code-tutorials-en"
- `en` only
- 100 words or more
- reading ease of 50 or more
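The length and readability filters could be reproduced roughly as follows; this is a sketch that assumes the reading-ease column is `textstat`'s Flesch score (as the column name suggests) and skips language identification, since the split is already English-only.
```python
import textstat
from datasets import load_dataset

def passes_filters(example):
    """Keep prose that is long and readable enough."""
    text = example["text"]
    return len(text.split()) >= 100 and textstat.flesch_reading_ease(text) >= 50

# Applying it to the `unfiltered` config approximates the default config.
raw = load_dataset("BEE-spoke-data/code-tutorials-en", "unfiltered", split="train")
filtered = raw.filter(passes_filters)
```
The resulting splits look like this: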
```
DatasetDict({
train: Dataset({
features: ['text', 'url', 'dump', 'source', 'word_count', 'flesch_reading_ease'],
num_rows: 223162
})
validation: Dataset({
features: ['text', 'url', 'dump', 'source', 'word_count', 'flesch_reading_ease'],
num_rows: 5873
})
test: Dataset({
features: ['text', 'url', 'dump', 'source', 'word_count', 'flesch_reading_ease'],
num_rows: 5873
})
})
``` | BEE-spoke-data/code-tutorials-en | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"source_datasets:mponty/code_tutorials",
"language:en",
"license:odc-by",
"region:us"
]
| 2023-11-08T15:09:01+00:00 | {"language": ["en"], "license": "odc-by", "size_categories": ["100K<n<1M"], "source_datasets": "mponty/code_tutorials", "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "unfiltered", "data_files": [{"split": "train", "path": "unfiltered/train-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "dump", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "word_count", "dtype": "int64"}, {"name": "flesch_reading_ease", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2003343392.8658142, "num_examples": 223162}, {"name": "validation", "num_bytes": 52722397.8378977, "num_examples": 5873}, {"name": "test", "num_bytes": 52722397.8378977, "num_examples": 5873}], "download_size": 1137457027, "dataset_size": 2108788188.5416098}, {"config_name": "unfiltered", "features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "dump", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "word_count", "dtype": "int64"}, {"name": "flesch_reading_ease", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 3452998372, "num_examples": 384646}], "download_size": 1859375824, "dataset_size": 3452998372}]} | 2023-11-28T23:53:28+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #size_categories-100K<n<1M #source_datasets-mponty/code_tutorials #language-English #license-odc-by #region-us
| # Dataset Card for "code-tutorials-en"
- 'en' only
- 100 words or more
- reading ease of 50 or more
| [
"# Dataset Card for \"code-tutorials-en\"\n\n- 'en' only\n- 100 words or more\n- reading ease of 50 or more"
]
| [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #source_datasets-mponty/code_tutorials #language-English #license-odc-by #region-us \n",
"# Dataset Card for \"code-tutorials-en\"\n\n- 'en' only\n- 100 words or more\n- reading ease of 50 or more"
]
| [
56,
32
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #source_datasets-mponty/code_tutorials #language-English #license-odc-by #region-us \n# Dataset Card for \"code-tutorials-en\"\n\n- 'en' only\n- 100 words or more\n- reading ease of 50 or more"
]
|
c6c45cbc608018c8593e8ebc8b61ba0ee4d892b6 |
# Dataset
Materials Project (2019 dump)
This dataset contains 133420 materials with formation energy per atom.
Processed from [mp.2019.04.01.json](https://figshare.com/articles/dataset/Graphs_of_Materials_Project_20190401/8097992)
# Download
Download link: [materials-project.tar.gz](https://huggingface.co/datasets/materials-toolkits/materials-project/raw/main/materials-project.tar.gz)
MD5 checksum `c132f3781f32cd17f3a92aa6501b9531`
# Content
Bundled in `materials-project.tar.gz`.
## Index (`index.json`)
list of dict:
* `index` (int) => index of the structure in data file.
* `id` (str) => id of Materials Project.
* `formula` (str) => formula.
* `natoms` (int) => number of atoms.
* `energy_pa` (float) => formation energy per atom.
## Data (`data.hdf5`)
fields:
* `structures` => a group containing structure information.
* `structures/cell` (float32) => lattice of the material.
* `structures/natoms` (int32) => number of atoms.
* `structures/energy_pa` (float32) => formation energy per atom.
* `structures/atoms_ptr` (int64) => position of the first atom of the structures in the `atoms` group.
* `atoms` => a group containing information about atoms.
* `atoms/positions` (float32) => the positions of the atoms.
* `atoms/atomic_number` (uint8) => the atomic number of the atoms.
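A minimal sketch for pulling one structure out of the bundle with `h5py`, using the fields listed above; the assumption that each structure's atoms form a contiguous slice starting at `atoms_ptr` is ours.
```python
import json
import h5py

with open("index.json") as f:
    index = json.load(f)

entry = index[0]
with h5py.File("data.hdf5", "r") as data:
    i = entry["index"]
    cell = data["structures/cell"][i]             # lattice of the material
    start = int(data["structures/atoms_ptr"][i])  # first atom of this structure
    stop = start + int(data["structures/natoms"][i])
    positions = data["atoms/positions"][start:stop]
    numbers = data["atoms/atomic_number"][start:stop]

print(entry["formula"], entry["energy_pa"], positions.shape, numbers[:5])
```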
| materials-toolkits/materials-project | [
"size_categories:100K<n<1M",
"license:mit",
"chemistry",
"region:us"
]
| 2023-11-08T15:12:18+00:00 | {"license": "mit", "size_categories": ["100K<n<1M"], "pretty_name": "Materials Project", "tags": ["chemistry"]} | 2024-02-07T11:26:10+00:00 | []
| []
| TAGS
#size_categories-100K<n<1M #license-mit #chemistry #region-us
|
# Dataset
Materials Project (2019 dump)
This dataset contains 133420 materials with formation energy per atom.
Processed from URL
# Download
Download link: URL
MD5 checksum 'c132f3781f32cd17f3a92aa6501b9531'
# Content
Bundled in 'URL'.
## Index ('URL')
list of dict:
* 'index' (int) => index of the structure in data file.
* 'id' (str) => id of Materials Project.
* 'formula' (str) => formula.
* 'natoms' (int) => number of atoms.
* 'energy_pa' (float) => formation energy per atom.
## Data ('data.hdf5')
fields:
* 'structures' => a group containing structure information.
* 'structures/cell' (float32) => lattice of the material.
* 'structures/natoms' (int32) => number of atoms.
* 'structures/energy_pa' (float32) => formation energy per atom.
* 'structures/atoms_ptr' (int64) => position of the first atom of the structures in the 'atoms' group.
* 'atoms' => a group containing information about atoms.
* 'atoms/positions' (float32) => the positions of the atoms.
* 'atoms/atomic_number' (uint8) => the atomic number of the atoms.
| [
"# Dataset\n\nMaterials project (2019 dump)\n\nThis dataset contains 133420 materials with formation energy per atom.\n\nProcessed from URL",
"# Download\n\nDownload link: URL\n\nMD5 checksum 'c132f3781f32cd17f3a92aa6501b9531'",
"# Content\n\nBundled in 'URL'.",
"## Index ('URL')\n\nlist of dict:\n* 'index' (int) => index of the structure in data file.\n* 'id' (str) => id of Materials Project.\n* 'formula' (str) => formula.\n* 'natoms' (int) => number of atoms.\n* 'energy_pa' (float) => formation energy per atom.",
"## Data ('data.hdf5')\n\nfields:\n* 'structures' => a group containing structure information.\n* 'structures/cell' (float32) => lattice of the material.\n* 'structures/natoms' (int32) => number of atoms.\n* 'structures/energy_pa' (float32) => formation energy per atom.\n* 'structures/atoms_ptr' (int64) => position of the first atom of the structures in the 'atoms' group.\n* 'atoms' => a group containing information about atoms.\n* 'atoms/positions' (float32) => the positions of the atoms.\n* 'atoms/atomic_number' (uint8) => the atomic number of the atoms."
]
| [
"TAGS\n#size_categories-100K<n<1M #license-mit #chemistry #region-us \n",
"# Dataset\n\nMaterials project (2019 dump)\n\nThis dataset contains 133420 materials with formation energy per atom.\n\nProcessed from URL",
"# Download\n\nDownload link: URL\n\nMD5 checksum 'c132f3781f32cd17f3a92aa6501b9531'",
"# Content\n\nBundled in 'URL'.",
"## Index ('URL')\n\nlist of dict:\n* 'index' (int) => index of the structure in data file.\n* 'id' (str) => id of Materials Project.\n* 'formula' (str) => formula.\n* 'natoms' (int) => number of atoms.\n* 'energy_pa' (float) => formation energy per atom.",
"## Data ('data.hdf5')\n\nfields:\n* 'structures' => a group containing structure information.\n* 'structures/cell' (float32) => lattice of the material.\n* 'structures/natoms' (int32) => number of atoms.\n* 'structures/energy_pa' (float32) => formation energy per atom.\n* 'structures/atoms_ptr' (int64) => position of the first atom of the structures in the 'atoms' group.\n* 'atoms' => a group containing information about atoms.\n* 'atoms/positions' (float32) => the positions of the atoms.\n* 'atoms/atomic_number' (uint8) => the atomic number of the atoms."
]
| [
27,
29,
32,
9,
84,
175
]
| [
"passage: TAGS\n#size_categories-100K<n<1M #license-mit #chemistry #region-us \n# Dataset\n\nMaterials project (2019 dump)\n\nThis dataset contains 133420 materials with formation energy per atom.\n\nProcessed from URL# Download\n\nDownload link: URL\n\nMD5 checksum 'c132f3781f32cd17f3a92aa6501b9531'# Content\n\nBundled in 'URL'.## Index ('URL')\n\nlist of dict:\n* 'index' (int) => index of the structure in data file.\n* 'id' (str) => id of Materials Project.\n* 'formula' (str) => formula.\n* 'natoms' (int) => number of atoms.\n* 'energy_pa' (float) => formation energy per atom.## Data ('data.hdf5')\n\nfields:\n* 'structures' => a group containing structure information.\n* 'structures/cell' (float32) => lattice of the material.\n* 'structures/natoms' (int32) => number of atoms.\n* 'structures/energy_pa' (float32) => formation energy per atom.\n* 'structures/atoms_ptr' (int64) => position of the first atom of the structures in the 'atoms' group.\n* 'atoms' => a group containing information about atoms.\n* 'atoms/positions' (float32) => the positions of the atoms.\n* 'atoms/atomic_number' (uint8) => the atomic number of the atoms."
]
|
a8f3162e283152f9a1ef51c696d26abe55c68b7f | ## Introduction
- [GitHub Repo](https://github.com/thunlp/UltraFeedback)
- [UltraRM-13b](https://huggingface.co/openbmb/UltraRM-13b)
- [UltraCM-13b](https://huggingface.co/openbmb/UltraCM-13b)
UltraFeedback is a **large-scale, fine-grained, diverse preference dataset**, used for training powerful reward models and critic models. We collect about 64k prompts from diverse resources (including UltraChat, ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN). We then use these prompts to query multiple LLMs (see Table for model lists) and generate 4 different responses for each prompt, resulting in a total of 256k samples.
To collect high-quality preference and textual feedback, we design a fine-grained annotation instruction, which contains 4 different aspects, namely **instruction-following**, **truthfulness**, **honesty** and **helpfulness**. We then ask GPT-4 to annotate the collected samples based on the instructions.
## Features
- 🆚 **Scale**: UltraFeedback consists of 64k prompts, 256k responses and 380k high-quality feedback. RLHF researchers could further construct around 1 million comparison pairs to train their reward models.
- 🌈 **Diversity**: As a preference dataset, diversity is the core requirement for UltraFeedback. We collect prompts from various sources and query a diverse set of state-of-the-art open-source and prestigious models. To further increase diversity, we intended to select different base models, i.e., LLaMA, Falcon, StarChat, MPT, GPT and Bard. We also apply various principles to stimulate models completing instructions in different ways.
- 🤯 **High-density**: UltraFeedback provides both numerical and textual feedback. Moreover, we wrote fine-grained annotation documents to help rate responses in all dimensions.
## Dataset Construction
### Instruction Sampling
We sample 63,967 instructions from 6 publicly available, high-quality datasets. We include all instructions from TruthfulQA and FalseQA, randomly sampling 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For Flan, we adopt a stratified sampling strategy, randomly sampling 3k instructions from the "Co" subset while sampling 10 instructions per task for the other three subsets, excluding those with overly long instructions.
```json
{
"evol_instruct": 10000,
"false_qa": 2339,
"flan": 20939,
"sharegpt": 19949,
"truthful_qa": 811,
"ultrachat": 9929
}
```
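(These per-source counts sum to 10000 + 2339 + 20939 + 19949 + 811 + 9929 = 63,967, matching the total stated above.)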
### Model Sampling
To prevent the reward model from overfitting to certain text styles or capturing spurious correlations between text style and rewards, we select different base models of all levels, with varying sizes, architectures and training data, to complete the instructions. We set up a pool of 17 models:
- Commercial Models: GPT-4, GPT-3.5 Turbo, Bard
- LLaMA family:
1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat
2. UltraLM-13B, UltraLM-65B
3. WizardLM-7B, WizardLM-13B, WizardLM-70B
4. Vicuna-33B
5. Alpaca-7B
- Non-LLaMA series:
1. Falcon-40B-instruct
2. MPT-30B-chat
3. StarChat-Beta
4. Pythia-12B
### Principle Sampling
Following [1] and [2], we define a set of principles to explicitly align model behaviors from different aspects. We set up a pool of 5 principles: Helpfulness, Truthfulness, Honesty, Verbalized Calibration and Harmlessness. For each instruction, we randomly sample 4 models to complete the instruction, and for each completion, we sample a principle and add it to the system prompt to align the model behavior. Considering that different datasets exhibit different characteristics, not all datasets are suitable for all principles. We provide the following table to show the principle distribution for each dataset.
| Dataset | Principle |
| ------------- | ------------------------------------------------------------ |
| Evol Instruct | 100% Helpful |
| FalseQA | 100% TruthfulQA |
| Flan | 60% Helpful, 20% Truthful, 20% Verbalized Calibration |
| ShareGPT | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
| TruthfulQA | 100% Truthful |
| UltraChat | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
[1] Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. Sun et al.
[2] Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Mukherjee et al.
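Put together, the per-instruction sampling described above could be sketched as follows; the principle weights are taken from the ShareGPT row of the table, and all names here are illustrative rather than the project's actual code.
```python
import random

MODEL_POOL = [
    "gpt-4", "gpt-3.5-turbo", "bard",
    "llama-2-7b-chat", "llama-2-13b-chat", "llama-2-70b-chat",
    "ultralm-13b", "ultralm-65b",
    "wizardlm-7b", "wizardlm-13b", "wizardlm-70b",
    "vicuna-33b", "alpaca-7b",
    "falcon-40b-instruct", "mpt-30b-chat", "starchat-beta", "pythia-12b",
]

# Principle distribution for ShareGPT instructions (from the table above).
PRINCIPLES = {
    "helpfulness": 0.60,
    "truthfulness": 0.20,
    "honesty": 0.18,
    "verbalized_calibration": 0.02,
}

def plan_completions(instruction, k=4):
    """Pick k distinct models, then sample one aligned principle per completion."""
    models = random.sample(MODEL_POOL, k)
    names, weights = list(PRINCIPLES), list(PRINCIPLES.values())
    return [
        {"instruction": instruction, "model": m,
         "principle": random.choices(names, weights=weights)[0]}
        for m in models
    ]
```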
## Dataset Format
```JSONC
{
"source": "sharegpt", // the dataset where the instruction comes from
"id": "D2TC9GP_0", // the id of the instruction in the source dataset
"instruction": "I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports",
"models": [ // four models that are randomly sampled from the model pool to complete the instruction
"falcon-40b-instruct",
"gpt-4",
"starchat",
"wizardlm-7b"
],
"completions": [ // four completions corresponding to the four models
{ // completion 1
"model": "falcon-40b-instruct",
"principle": "helpfulness", // priciple that used to align model behavior
"custom_system_prompt": "As an AI assistant, ensure your response offers the perfect blend of accuracy, positivity, and intrigue. Strive to be educational while keeping the user engaged.", // the system prompt corresponding to the principle, sampled from a pool of GPT-4 generated system prompts
"response": "Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!",
"annotations": {
"instruction_following": [
{
"Rating": "2",
"Rationale": "The text only partially addresses the task goal by providing a general list of activities in Cairo without considering the specific needs of the family members, such as limited walking ability and adventure preferences."
}
],
"honesty": [
{
"Rating": "3",
"Rationale": "The response is confident but contains subtle mistakes. It suggests whitewater rafting on the Nile and diving in the Red Sea, which are not feasible for a 4-5 day trip to Cairo. The rest of the suggestions are accurate and relevant to the family's preferences."
}
],
"truthfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The text suggests whitewater rafting on the Nile and diving in the Red Sea, which are not available in Cairo. These activities are factually incorrect and not aligned with the instruction.",
"Rating": "3",
"Rationale For Rating": "The text provides some truthful information, but the inclusion of whitewater rafting and diving in the Red Sea causes partial hallucination and misunderstanding."
}
],
"helpfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The response is clear and relevant, providing useful information, but it does not address the limited walking ability of the grandparents or the adventure sports preferences of the sister, mom, and the user.",
"Rating": "3",
"Rationale For Rating": "The text is correct and provides useful information, but it does not fully address the specific needs and preferences of the family."
}
]
}
},
{ // completion 2
...
},
{ // completion 3
...
},
{ // completion 4
...
}
]
},
```
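Given that schema, a natural way to consume the dataset is to flatten each sample into `chosen`/`rejected` pairs for reward-model training. The sketch below makes a few assumptions: the split name, equal weighting of the four aspects, ratings that always parse as integers, and `completions` materializing as a list of dicts (adjust the indexing if your loader returns it column-wise).

```python
from itertools import combinations
from datasets import load_dataset

ds = load_dataset("zhengr/UltraFeedback", split="train")  # split name assumed

ASPECTS = ["instruction_following", "honesty", "truthfulness", "helpfulness"]

def overall_score(completion):
    # Equal-weight average of the four fine-grained ratings; per the
    # schema above, ratings are stored as strings.
    ratings = [int(completion["annotations"][a][0]["Rating"]) for a in ASPECTS]
    return sum(ratings) / len(ratings)

def to_preference_pairs(example):
    scored = [(overall_score(c), c["response"]) for c in example["completions"]]
    pairs = []
    for (s_a, r_a), (s_b, r_b) in combinations(scored, 2):
        if s_a == s_b:
            continue  # ties carry no preference signal
        chosen, rejected = (r_a, r_b) if s_a > s_b else (r_b, r_a)
        pairs.append({"prompt": example["instruction"],
                      "chosen": chosen, "rejected": rejected})
    return pairs

pairs = [p for ex in ds.select(range(100)) for p in to_preference_pairs(ex)]
```

With 4 completions per prompt this yields up to 6 pairs per instruction; comparing responses per aspect instead of on the averaged score multiplies that further, which is presumably where the larger pair counts quoted above come from.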
## Dataset Example
Here we present an example from UltraFeedback:
> **User:** I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports
> **Assistant 1 (falcon-40b-instruct):** Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!
> **Assistant 2 (gpt-4):** In Cairo for 4-5 days, the following activities cater to your family's preferences, with consideration for the limited walking ability of your grandparents:\n\n1. Visit the Pyramids of Giza and the Great Sphinx: A must-see in Cairo, it's an incredible experience for all ages. You can explore the premises using a hired camel, horse, or a horse-drawn carriage which accommodate the needs of your grandparents while keeping the experience enjoyable.\n\n2. Egyptian Museum: This museum in Tahrir Square contains over 120,000 artifacts of Egypt's history. A wheelchair can be arranged for the grandparents to comfortably explore the museum at their own pace.\n\n3. Khan el-Khalili Bazaar: A famous market in Cairo, suitable for souvenir shopping and experiencing the local culture. There are many cafes and restaurants your family can rest at during the visit.\n\n4. Nile River Cruise: Enjoy a dinner cruise with your family on the famous Nile River. It includes entertainment and a beautiful view of the city by night. Boats often have ramps and accessible facilities for limited mobility passangers.\n\n5. Felucca Ride: During the day, go for a more traditional sailboat ride called a Felucca. This can be done on an off-day or as a relaxation activity after an active day.\n\nFor adventure activities, you and your mom and sister can explore these options:\n\n1. Sandboarding: Head to the desert for a thrilling sandboarding experience on the dunes just outside Cairo.\n\n2. Hot Air Balloon Ride: Experience Egypt from above with a hot air balloon ride in Luxor. It would require a day trip from Cairo but is an incredible experience.\n\n3. Quad Biking: Go for a guided quad biking adventure in the desert, enjoying panoramic views of the iconic Pyramids.\n\nSpace the activities according to your family's preferences and ensure rest periods in the afternoon. Consider hiring a private guide or joining a wheelchair-accessible tour since you're concerned about your grandparents' walking ability.
> **Assistant 3 (starchat):** Here are some of the top things to do in Cairo, Egypt, with a family of 5:\n\n- Giza Pyramids: The Giza Pyramids, located just outside Cairo, are a must-see for anyone visiting the city. The Great Pyramid of Khufu, the Great Pyramid of Khafre, and the Great Pyramid of Menkaure make up one of the Seven Wonders of the Ancient World and are an impressive sight to behold.\n\n- Egyptian Museum: The Egyptian Museum, located in the heart of Cairo, is home to one of the world's largest and most impressive collections of ancient Egyptian art and artifacts. Visitors can explore the museum's extensive collection, which includes items such as mummies, sarcophagi, statues, paintings, and more.\n\n- Citadel of Salah El-Din: Located on a hill overlooking Cairo, the Citadel of Salah El-Din offers stunning views of the city and is an excellent place to get a sense of Cairo's rich history. Visitors can explore the various museums and mosques within the citadel, and also take a stroll through the beautiful gardens.\n\n- Khan El-Khalili Bazaar: Khan El-Khalili Bazaar is one of Cairo's most popular attractions, a bustling market where visitors can find everything from spices and pottery to clothes and jewelry. It's a great place to do some shopping and explore the local culture.\n\n- Alexandria: Just an hour's drive from Cairo, Alexandria is a historic city with a rich legacy. Visitors can explore the ancient Roman Amphitheater, the Qaitbay Citadel, and the Catacombs of Kom el-Shoqafa. They can also take a boat trip to the island of Montazah, where they can see the famous Graffiti Wall and other street art.\n\nAll of these attractions are family-friendly, with plenty of things to see and do. The Giza Pyramids in particular are very accessible and well-maintained, making them suitable for all ages.
> **Assistant 4 (wizardlm-7b):** If you are looking for adventure sports, you can explore the pyramids of Giza and go hot air ballooning over the desert. Another option could be to take a camel ride in the desert or go kayaking in the Nile River. For family-friendly activities, you can visit the Egyptian Museum, Cairo Tower, and Khan el-Khalili Bazaar. Additionally, you can take a day trip to the historical city of Alexandria and visit the Pompey's Pillar and the Roman Amphitheater. Another option could be to go on a guided tour of the ancient city of Petra in Jordan, which is a short flight from Cairo.
## Limitations
- Although GPT-4 can provide well-aligned annotation and textual feedback for most samples, we must note that GPT-4 also makes mistakes and provides inaccurate feedback. | zhengr/UltraFeedback | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
]
| 2023-11-08T15:14:57+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"]} | 2023-11-08T15:15:16+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #region-us
| Introduction
------------
* GitHub Repo
* UltraRM-13b
* UltraCM-13b
UltraFeedback is a large-scale, fine-grained, diverse preference dataset, used for training powerful reward models and critic models. We collect about 64k prompts from diverse resources (including UltraChat, ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN). We then use these prompts to query multiple LLMs (see Table for model lists) and generate 4 different responses for each prompt, resulting in a total of 256k samples.
To collect high-quality preference and textual feedback, we design a fine-grained annotation instruction, which contains 4 different aspects, namely instruction-following, truthfulness, honesty and helpfulness. We then ask GPT-4 to annotate the collected samples based on the instructions.
Features
--------
* 🆚 Scale: UltraFeedback consists of 64k prompts, 256k responses and 380k high-quality feedback. RLHF researchers could further construct around 1 million comparison pairs to train their reward models.
* Diversity: As a preference dataset, diversity is the core requirement for UltraFeedback. We collect prompts from various sources and query a diverse set of state-of-the-art open-source and proprietary models. To further increase diversity, we intentionally select different base models, i.e., LLaMA, Falcon, StarChat, MPT, GPT and Bard. We also apply various principles to stimulate models to complete instructions in different ways.
* High-density: UltraFeedback provides both numerical and textual feedback. Moreover, we wrote fine-grained annotation documents to help rate responses along all dimensions.
Dataset Construction
--------------------
### Instruction Sampling
We sample 63,967 instructions from 6 publicly available, high-quality datasets. We include all instructions from TruthfulQA and FalseQA, and randomly sample 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For Flan, we adopt a stratified sampling strategy, randomly sampling 3k instructions from the "Co" subset while sampling 10 instructions per task for the other three subsets, excluding those with overly long instructions.
### Model Sampling
To prevent the reward model from overfitting to a certain text style or capturing spurious correlations between text style and rewards, we select base models of all capability levels, with varying sizes, architectures and training data, to complete the instructions. We set up a pool of 17 models:
* Commercial Models: GPT-4, GPT-3.5 Turbo, Bard
* LLaMA family:
1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat
2. UltraLM-13B, UltraLM-65B
3. WizardLM-7B, WizardLM-13B, WizardLM-70B
4. Vicuna-33B
5. Alpaca-7B
* Non-LLaMA series:
1. Falcon-40B-instruct
2. MPT-30B-chat
3. StarChat-Beta
4. Pythia-12B
### Principle Sampling
Following [1] and [2], we define a set of principles to explicitly align model behaviors along different aspects. We set up a pool of 5 principles: Helpfulness, Truthfulness, Honesty, Verbalized Calibration and Harmlessness. For each instruction, we randomly sample 4 models to complete it, and for each completion, we sample a principle and add it to the system prompt to align the model's behavior. Because different datasets exhibit different characteristics, not all datasets are suitable for all principles. The following table shows the principle distribution for each dataset.
[1] Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. Sun et al.
[2] Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Mukherjee et al.
Dataset Format
--------------
Dataset Example
---------------
Here we present an example from UltraFeedback:
>
> User: I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports
>
>
>
>
> Assistant 1 (falcon-40b-instruct): Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!
>
>
>
>
> Assistant 2 (gpt-4): In Cairo for 4-5 days, the following activities cater to your family's preferences, with consideration for the limited walking ability of your grandparents:\n\n1. Visit the Pyramids of Giza and the Great Sphinx: A must-see in Cairo, it's an incredible experience for all ages. You can explore the premises using a hired camel, horse, or a horse-drawn carriage which accommodate the needs of your grandparents while keeping the experience enjoyable.\n\n2. Egyptian Museum: This museum in Tahrir Square contains over 120,000 artifacts of Egypt's history. A wheelchair can be arranged for the grandparents to comfortably explore the museum at their own pace.\n\n3. Khan el-Khalili Bazaar: A famous market in Cairo, suitable for souvenir shopping and experiencing the local culture. There are many cafes and restaurants your family can rest at during the visit.\n\n4. Nile River Cruise: Enjoy a dinner cruise with your family on the famous Nile River. It includes entertainment and a beautiful view of the city by night. Boats often have ramps and accessible facilities for limited mobility passangers.\n\n5. Felucca Ride: During the day, go for a more traditional sailboat ride called a Felucca. This can be done on an off-day or as a relaxation activity after an active day.\n\nFor adventure activities, you and your mom and sister can explore these options:\n\n1. Sandboarding: Head to the desert for a thrilling sandboarding experience on the dunes just outside Cairo.\n\n2. Hot Air Balloon Ride: Experience Egypt from above with a hot air balloon ride in Luxor. It would require a day trip from Cairo but is an incredible experience.\n\n3. Quad Biking: Go for a guided quad biking adventure in the desert, enjoying panoramic views of the iconic Pyramids.\n\nSpace the activities according to your family's preferences and ensure rest periods in the afternoon. Consider hiring a private guide or joining a wheelchair-accessible tour since you're concerned about your grandparents' walking ability.
>
>
>
>
> Assistant 3 (starchat): Here are some of the top things to do in Cairo, Egypt, with a family of 5:\n\n- Giza Pyramids: The Giza Pyramids, located just outside Cairo, are a must-see for anyone visiting the city. The Great Pyramid of Khufu, the Great Pyramid of Khafre, and the Great Pyramid of Menkaure make up one of the Seven Wonders of the Ancient World and are an impressive sight to behold.\n\n- Egyptian Museum: The Egyptian Museum, located in the heart of Cairo, is home to one of the world's largest and most impressive collections of ancient Egyptian art and artifacts. Visitors can explore the museum's extensive collection, which includes items such as mummies, sarcophagi, statues, paintings, and more.\n\n- Citadel of Salah El-Din: Located on a hill overlooking Cairo, the Citadel of Salah El-Din offers stunning views of the city and is an excellent place to get a sense of Cairo's rich history. Visitors can explore the various museums and mosques within the citadel, and also take a stroll through the beautiful gardens.\n\n- Khan El-Khalili Bazaar: Khan El-Khalili Bazaar is one of Cairo's most popular attractions, a bustling market where visitors can find everything from spices and pottery to clothes and jewelry. It's a great place to do some shopping and explore the local culture.\n\n- Alexandria: Just an hour's drive from Cairo, Alexandria is a historic city with a rich legacy. Visitors can explore the ancient Roman Amphitheater, the Qaitbay Citadel, and the Catacombs of Kom el-Shoqafa. They can also take a boat trip to the island of Montazah, where they can see the famous Graffiti Wall and other street art.\n\nAll of these attractions are family-friendly, with plenty of things to see and do. The Giza Pyramids in particular are very accessible and well-maintained, making them suitable for all ages.
>
>
>
>
> Assistant 4 (wizardlm-7b): If you are looking for adventure sports, you can explore the pyramids of Giza and go hot air ballooning over the desert. Another option could be to take a camel ride in the desert or go kayaking in the Nile River. For family-friendly activities, you can visit the Egyptian Museum, Cairo Tower, and Khan el-Khalili Bazaar. Additionally, you can take a day trip to the historical city of Alexandria and visit the Pompey's Pillar and the Roman Amphitheater. Another option could be to go on a guided tour of the ancient city of Petra in Jordan, which is a short flight from Cairo.
>
>
>
Limitations
-----------
* Although GPT-4 can provide well-aligned annotation and textual feedback for most samples, we must note that GPT-4 also makes mistakes and provides inaccurate feedback.
| [
"### Instruction Sampling\n\n\nWe sample 63,967 instructions from 6 public available and high-quality datasets. We include all instructions from TruthfulQA and FalseQA, randomly sampling 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For Flan, we adopt a stratified sampling strtegy, randomly samping 3k instructions from\"Co\" subset whereas sampling 10 instructions per task for the other three subsets, excluding those with overly long instructions.",
"### Model Sampling\n\n\nTo prevent reward model from overfiting to certain text style or capturing spurious correlation between text style and rewards, we select different base models of all levels, with varying sizes, architectures and training data, to complete the instructions. We set up a pool of 17 models:\n\n\n* Commercial Models: GPT-4, GPT-3.5 Turbo, Bard\n* LLaMA family:\n\t1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat\n\t2. UltraLM-13B, UltraLM-65B\n\t3. WizardLM-7B, WizardLM-13B, WizardLM-70B\n\t4. Vicuna-33B\n\t5. Alpaca-7B\n* Non-LLaMA series:\n\t1. Falcon-40B-instruct\n\t2. MPT-30B-chat\n\t3. StarChat-Beta\n\t4. Pythia-12B",
"### Principle Sampling\n\n\nFollowing [1] and [2], we define a set of principles to explicitly align model behaviors from different aspects. We set up a pool of 5 principles: Helpfulness, Truthfulness, Honesty, Verbalized Calibration and Harmless. For each instruction, we randomly sample 4 models to complete the instruction, and for each completion, we sample a principle and add it to system prompt to align the model behavior. Considering different datasets outline different characteristics, not all dataset are suitable for all principles. We provide the following table to show the principle distribution for each dataset.\n\n\n\n[1] Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. Sun et al.\n\n\n[2] Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Mukherjee et al.\n\n\nDataset Format\n--------------\n\n\nDataset Example\n---------------\n\n\nHere we present an example of UltraFeedback\n\n\n\n> \n> User: I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports\n> \n> \n> \n\n\n\n> \n> Assisstant 1 (falcon-40b-instruct): Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!\n> \n> \n> \n\n\n\n> \n> Assisstant 2 (gpt-4): In Cairo for 4-5 days, the following activities cater to your family's preferences, with consideration for the limited walking ability of your grandparents:\\n\\n1. Visit the Pyramids of Giza and the Great Sphinx: A must-see in Cairo, it's an incredible experience for all ages. You can explore the premises using a hired camel, horse, or a horse-drawn carriage which accommodate the needs of your grandparents while keeping the experience enjoyable.\\n\\n2. Egyptian Museum: This museum in Tahrir Square contains over 120,000 artifacts of Egypt's history. A wheelchair can be arranged for the grandparents to comfortably explore the museum at their own pace.\\n\\n3. Khan el-Khalili Bazaar: A famous market in Cairo, suitable for souvenir shopping and experiencing the local culture. There are many cafes and restaurants your family can rest at during the visit.\\n\\n4. Nile River Cruise: Enjoy a dinner cruise with your family on the famous Nile River. It includes entertainment and a beautiful view of the city by night. Boats often have ramps and accessible facilities for limited mobility passangers.\\n\\n5. Felucca Ride: During the day, go for a more traditional sailboat ride called a Felucca. This can be done on an off-day or as a relaxation activity after an active day.\\n\\nFor adventure activities, you and your mom and sister can explore these options:\\n\\n1. Sandboarding: Head to the desert for a thrilling sandboarding experience on the dunes just outside Cairo.\\n\\n2. 
Hot Air Balloon Ride: Experience Egypt from above with a hot air balloon ride in Luxor. It would require a day trip from Cairo but is an incredible experience.\\n\\n3. Quad Biking: Go for a guided quad biking adventure in the desert, enjoying panoramic views of the iconic Pyramids.\\n\\nSpace the activities according to your family's preferences and ensure rest periods in the afternoon. Consider hiring a private guide or joining a wheelchair-accessible tour since you're concerned about your grandparents' walking ability.\n> \n> \n> \n\n\n\n> \n> Assisstant 3 (starchat): Here are some of the top things to do in Cairo, Egypt, with a family of 5:\\n\\n- Giza Pyramids: The Giza Pyramids, located just outside Cairo, are a must-see for anyone visiting the city. The Great Pyramid of Khufu, the Great Pyramid of Khafre, and the Great Pyramid of Menkaure make up one of the Seven Wonders of the Ancient World and are an impressive sight to behold.\\n\\n- Egyptian Museum: The Egyptian Museum, located in the heart of Cairo, is home to one of the world's largest and most impressive collections of ancient Egyptian art and artifacts. Visitors can explore the museum's extensive collection, which includes items such as mummies, sarcophagi, statues, paintings, and more.\\n\\n- Citadel of Salah El-Din: Located on a hill overlooking Cairo, the Citadel of Salah El-Din offers stunning views of the city and is an excellent place to get a sense of Cairo's rich history. Visitors can explore the various museums and mosques within the citadel, and also take a stroll through the beautiful gardens.\\n\\n- Khan El-Khalili Bazaar: Khan El-Khalili Bazaar is one of Cairo's most popular attractions, a bustling market where visitors can find everything from spices and pottery to clothes and jewelry. It's a great place to do some shopping and explore the local culture.\\n\\n- Alexandria: Just an hour's drive from Cairo, Alexandria is a historic city with a rich legacy. Visitors can explore the ancient Roman Amphitheater, the Qaitbay Citadel, and the Catacombs of Kom el-Shoqafa. They can also take a boat trip to the island of Montazah, where they can see the famous Graffiti Wall and other street art.\\n\\nAll of these attractions are family-friendly, with plenty of things to see and do. The Giza Pyramids in particular are very accessible and well-maintained, making them suitable for all ages.\n> \n> \n> \n\n\n\n> \n> Assisstant 4 (wizardlm-7b): If you are looking for adventure sports, you can explore the pyramids of Giza and go hot air ballooning over the desert. Another option could be to take a camel ride in the desert or go kayaking in the Nile River. For family-friendly activities, you can visit the Egyptian Museum, Cairo Tower, and Khan el-Khalili Bazaar. Additionally, you can take a day trip to the historical city of Alexandria and visit the Pompey's Pillar and the Roman Amphitheater. Another option could be to go on a guided tour of the ancient city of Petra in Jordan, which is a short flight from Cairo.\n> \n> \n> \n\n\nLimitations\n-----------\n\n\n* Although GPT-4 can provide well-aligned annotation and textual feedback for most samples, we must note that GPT-4 also makes mistakes and provides inaccurate feedbacks."
]
| [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #region-us \n",
"### Instruction Sampling\n\n\nWe sample 63,967 instructions from 6 public available and high-quality datasets. We include all instructions from TruthfulQA and FalseQA, randomly sampling 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For Flan, we adopt a stratified sampling strtegy, randomly samping 3k instructions from\"Co\" subset whereas sampling 10 instructions per task for the other three subsets, excluding those with overly long instructions.",
"### Model Sampling\n\n\nTo prevent reward model from overfiting to certain text style or capturing spurious correlation between text style and rewards, we select different base models of all levels, with varying sizes, architectures and training data, to complete the instructions. We set up a pool of 17 models:\n\n\n* Commercial Models: GPT-4, GPT-3.5 Turbo, Bard\n* LLaMA family:\n\t1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat\n\t2. UltraLM-13B, UltraLM-65B\n\t3. WizardLM-7B, WizardLM-13B, WizardLM-70B\n\t4. Vicuna-33B\n\t5. Alpaca-7B\n* Non-LLaMA series:\n\t1. Falcon-40B-instruct\n\t2. MPT-30B-chat\n\t3. StarChat-Beta\n\t4. Pythia-12B",
"### Principle Sampling\n\n\nFollowing [1] and [2], we define a set of principles to explicitly align model behaviors from different aspects. We set up a pool of 5 principles: Helpfulness, Truthfulness, Honesty, Verbalized Calibration and Harmless. For each instruction, we randomly sample 4 models to complete the instruction, and for each completion, we sample a principle and add it to system prompt to align the model behavior. Considering different datasets outline different characteristics, not all dataset are suitable for all principles. We provide the following table to show the principle distribution for each dataset.\n\n\n\n[1] Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. Sun et al.\n\n\n[2] Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Mukherjee et al.\n\n\nDataset Format\n--------------\n\n\nDataset Example\n---------------\n\n\nHere we present an example of UltraFeedback\n\n\n\n> \n> User: I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports\n> \n> \n> \n\n\n\n> \n> Assisstant 1 (falcon-40b-instruct): Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!\n> \n> \n> \n\n\n\n> \n> Assisstant 2 (gpt-4): In Cairo for 4-5 days, the following activities cater to your family's preferences, with consideration for the limited walking ability of your grandparents:\\n\\n1. Visit the Pyramids of Giza and the Great Sphinx: A must-see in Cairo, it's an incredible experience for all ages. You can explore the premises using a hired camel, horse, or a horse-drawn carriage which accommodate the needs of your grandparents while keeping the experience enjoyable.\\n\\n2. Egyptian Museum: This museum in Tahrir Square contains over 120,000 artifacts of Egypt's history. A wheelchair can be arranged for the grandparents to comfortably explore the museum at their own pace.\\n\\n3. Khan el-Khalili Bazaar: A famous market in Cairo, suitable for souvenir shopping and experiencing the local culture. There are many cafes and restaurants your family can rest at during the visit.\\n\\n4. Nile River Cruise: Enjoy a dinner cruise with your family on the famous Nile River. It includes entertainment and a beautiful view of the city by night. Boats often have ramps and accessible facilities for limited mobility passangers.\\n\\n5. Felucca Ride: During the day, go for a more traditional sailboat ride called a Felucca. This can be done on an off-day or as a relaxation activity after an active day.\\n\\nFor adventure activities, you and your mom and sister can explore these options:\\n\\n1. Sandboarding: Head to the desert for a thrilling sandboarding experience on the dunes just outside Cairo.\\n\\n2. 
Hot Air Balloon Ride: Experience Egypt from above with a hot air balloon ride in Luxor. It would require a day trip from Cairo but is an incredible experience.\\n\\n3. Quad Biking: Go for a guided quad biking adventure in the desert, enjoying panoramic views of the iconic Pyramids.\\n\\nSpace the activities according to your family's preferences and ensure rest periods in the afternoon. Consider hiring a private guide or joining a wheelchair-accessible tour since you're concerned about your grandparents' walking ability.\n> \n> \n> \n\n\n\n> \n> Assisstant 3 (starchat): Here are some of the top things to do in Cairo, Egypt, with a family of 5:\\n\\n- Giza Pyramids: The Giza Pyramids, located just outside Cairo, are a must-see for anyone visiting the city. The Great Pyramid of Khufu, the Great Pyramid of Khafre, and the Great Pyramid of Menkaure make up one of the Seven Wonders of the Ancient World and are an impressive sight to behold.\\n\\n- Egyptian Museum: The Egyptian Museum, located in the heart of Cairo, is home to one of the world's largest and most impressive collections of ancient Egyptian art and artifacts. Visitors can explore the museum's extensive collection, which includes items such as mummies, sarcophagi, statues, paintings, and more.\\n\\n- Citadel of Salah El-Din: Located on a hill overlooking Cairo, the Citadel of Salah El-Din offers stunning views of the city and is an excellent place to get a sense of Cairo's rich history. Visitors can explore the various museums and mosques within the citadel, and also take a stroll through the beautiful gardens.\\n\\n- Khan El-Khalili Bazaar: Khan El-Khalili Bazaar is one of Cairo's most popular attractions, a bustling market where visitors can find everything from spices and pottery to clothes and jewelry. It's a great place to do some shopping and explore the local culture.\\n\\n- Alexandria: Just an hour's drive from Cairo, Alexandria is a historic city with a rich legacy. Visitors can explore the ancient Roman Amphitheater, the Qaitbay Citadel, and the Catacombs of Kom el-Shoqafa. They can also take a boat trip to the island of Montazah, where they can see the famous Graffiti Wall and other street art.\\n\\nAll of these attractions are family-friendly, with plenty of things to see and do. The Giza Pyramids in particular are very accessible and well-maintained, making them suitable for all ages.\n> \n> \n> \n\n\n\n> \n> Assisstant 4 (wizardlm-7b): If you are looking for adventure sports, you can explore the pyramids of Giza and go hot air ballooning over the desert. Another option could be to take a camel ride in the desert or go kayaking in the Nile River. For family-friendly activities, you can visit the Egyptian Museum, Cairo Tower, and Khan el-Khalili Bazaar. Additionally, you can take a day trip to the historical city of Alexandria and visit the Pompey's Pillar and the Roman Amphitheater. Another option could be to go on a guided tour of the ancient city of Petra in Jordan, which is a short flight from Cairo.\n> \n> \n> \n\n\nLimitations\n-----------\n\n\n* Although GPT-4 can provide well-aligned annotation and textual feedback for most samples, we must note that GPT-4 also makes mistakes and provides inaccurate feedbacks."
]
| [
38,
121,
190,
1691
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-mit #region-us \n### Instruction Sampling\n\n\nWe sample 63,967 instructions from 6 public available and high-quality datasets. We include all instructions from TruthfulQA and FalseQA, randomly sampling 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For Flan, we adopt a stratified sampling strtegy, randomly samping 3k instructions from\"Co\" subset whereas sampling 10 instructions per task for the other three subsets, excluding those with overly long instructions.### Model Sampling\n\n\nTo prevent reward model from overfiting to certain text style or capturing spurious correlation between text style and rewards, we select different base models of all levels, with varying sizes, architectures and training data, to complete the instructions. We set up a pool of 17 models:\n\n\n* Commercial Models: GPT-4, GPT-3.5 Turbo, Bard\n* LLaMA family:\n\t1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat\n\t2. UltraLM-13B, UltraLM-65B\n\t3. WizardLM-7B, WizardLM-13B, WizardLM-70B\n\t4. Vicuna-33B\n\t5. Alpaca-7B\n* Non-LLaMA series:\n\t1. Falcon-40B-instruct\n\t2. MPT-30B-chat\n\t3. StarChat-Beta\n\t4. Pythia-12B"
]
|
a0e6dda9174f2ddcce3899adcf076007310251b2 | # Dataset Card for "bw_spec_cls_80_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_11 | [
"region:us"
]
| 2023-11-08T15:30:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "27855", "1": "27856", "2": "27866", "3": "27945", "4": "27953", "5": "27975", "6": "27978", "7": "27981", "8": "27987", "9": "28241", "10": "28260", "11": "28477", "12": "28478", "13": "28479", "14": "28480", "15": "28481", "16": "28482", "17": "28483", "18": "28484", "19": "28485", "20": "28546", "21": "28548", "22": "28553", "23": "28571", "24": "28608", "25": "29045", "26": "29128", "27": "29180", "28": "29243", "29": "29245", "30": "29255", "31": "29271", "32": "29272", "33": "29355", "34": "29465", "35": "29480", "36": "29587", "37": "29602", "38": "29673", "39": "29718", "40": "29719", "41": "29720", "42": "29721", "43": "29738", "44": "29739", "45": "29740", "46": "29741", "47": "29742", "48": "29744", "49": "29745", "50": "29746", "51": "29747", "52": "29750", "53": "29752", "54": "29807", "55": "29813", "56": "29816", "57": "29961", "58": "29971", "59": "30041", "60": "30043", "61": "30050", "62": "30056", "63": "30058", "64": "30059", "65": "30090", "66": "30095", "67": "30120", "68": "30196", "69": "30198", "70": "30230", "71": "30486", "72": "30487", "73": "30488", "74": "30519", "75": "30520", "76": "30521", "77": "30522", "78": "30636", "79": "30690"}}}}], "splits": [{"name": "train", "num_bytes": 89109867.2, "num_examples": 1600}], "download_size": 88188426, "dataset_size": 89109867.2}} | 2023-11-08T15:30:59+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_11"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_11\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_11\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_11\"\n\nMore Information needed"
]
|
0bd5e141605f55ce04e845c65ac2c716ef89e8ec |
# Dataset Card for VLFeedback
- **Homepage:** https://vlf-silkie.github.io/
- **Repository:** https://github.com/vlf-silkie/VLFeedback
- **Paper:** https://arxiv.org/abs/2312.10665
## Dataset Summary
VLFeedback is a **large-scale vision-language preference dataset**, annotated by GPT-4V. It consists of 80k multi-modal instructions from various sources that encompass a broad range of LVLM capabilities.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/622f103fc78da4c7ebd7c887/kDCFRInpUoEVLaK-1T1Bp.png" alt="fig1" width="60%"/>
</p>
We build a model pool of 12 LVLMs and each data sample contains 4 responses from different models. Each response is annotated in three aspects: **helpfulness**, **visual faithfulness**, and **ethical considerations**. The resulting preference dataset contains **more than 380k comparison pairs**.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/622f103fc78da4c7ebd7c887/zOLje1p2ytJ27Ml2kJKhI.png" alt="fig2" width="60%"/>
</p>
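To get a feel for the annotations, here is a minimal loading sketch. The split name comes from the repository metadata, and note that `datasets` typically materializes a sequence of structs column-wise, i.e., as a dict of parallel lists:

```python
from datasets import load_dataset

ds = load_dataset("MMInstruction/VLFeedback", split="train")

example = ds[0]
comps = example["completions"]  # parallel lists: model, response, annotations
for model, ann in zip(comps["model"], comps["annotations"]):
    print(model,
          "helpfulness:", ann["Helpfulness"]["Rating"],
          "faithfulness:", ann["Visual Faithfulness"]["Rating"],
          "ethics:", ann["Ethical Considerations"]["Rating"])
```

From these per-aspect ratings, comparison pairs can be built in the same way as for text-only preference datasets.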
## Citation
```
@article{2023vlfeedback,
author = {Lei Li and Zhihui Xie and Mukai Li and Shunian Chen and Peiyi Wang and Liang Chen and Yazheng Yang and Benyou Wang and Lingpeng Kong},
title = {Silkie: Preference Distillation for Large Visual Language Models},
publisher = {arXiv:2312.10665},
year = {2023}
}
``` | MMInstruction/VLFeedback | [
"task_categories:visual-question-answering",
"size_categories:100K<n<1M",
"arxiv:2312.10665",
"region:us"
]
| 2023-11-08T15:46:04+00:00 | {"size_categories": ["100K<n<1M"], "task_categories": ["visual-question-answering"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "models", "sequence": "string"}, {"name": "completions", "sequence": [{"name": "annotations", "struct": [{"name": "Helpfulness", "struct": [{"name": "Rating", "dtype": "string"}, {"name": "Rationale", "dtype": "string"}]}, {"name": "Ethical Considerations", "struct": [{"name": "Rating", "dtype": "string"}, {"name": "Rationale", "dtype": "string"}]}, {"name": "Visual Faithfulness", "struct": [{"name": "Rating", "dtype": "string"}, {"name": "Rationale", "dtype": "string"}]}]}, {"name": "model", "dtype": "string"}, {"name": "response", "dtype": "string"}]}, {"name": "img_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 376466345, "num_examples": 80258}], "download_size": 3335765432, "dataset_size": 376466345}} | 2023-12-20T03:20:43+00:00 | [
"2312.10665"
]
| []
| TAGS
#task_categories-visual-question-answering #size_categories-100K<n<1M #arxiv-2312.10665 #region-us
|
# Dataset Card for VLFeedback
- Homepage: URL
- Repository: URL
- Paper: URL
## Dataset Summary
VLFeedback is a large-scale vision-language preference dataset, annotated by GPT-4V. It consists of 80k multi-modal instructions from various sources that encompass a broad range of LVLM capabilities.
<p align="center">
<img src="URL alt="fig1" width="60%"/>
</p>
We build a model pool of 12 LVLMs and each data sample contains 4 responses from different models. Each response is annotated in three aspects: helpfulness, visual faithfulness, and ethical considerations. The resulting preference dataset contains more than 380k comparison pairs.
<p align="center">
<img src="URL alt="fig2" width="60%"/>
</p>
| [
"# Dataset Card for VLFeedback\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Dataset Summary\n\nVLFeedback is a large-scale vision-language preference dataset, annotated by GPT-4V. It consists of 80k multi-modal instructions from various souces that encompass various capabilities of LVLMs.\n\n<p align=\"center\">\n <img src=\"URL alt=\"fig1\" width=\"60%\"/>\n</p>\n\nWe build a model pool of 12 LVLMs and each data sample contains 4 responses from different models. Each response is annotated in three aspects: helpfulness, visual faithfulness, and ethical considerations. The resulting preference dataset contains more than 380k comparison pairs.\n\n<p align=\"center\">\n <img src=\"URL alt=\"fig2\" width=\"60%\"/>\n</p>"
]
| [
"TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #arxiv-2312.10665 #region-us \n",
"# Dataset Card for VLFeedback\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Dataset Summary\n\nVLFeedback is a large-scale vision-language preference dataset, annotated by GPT-4V. It consists of 80k multi-modal instructions from various souces that encompass various capabilities of LVLMs.\n\n<p align=\"center\">\n <img src=\"URL alt=\"fig1\" width=\"60%\"/>\n</p>\n\nWe build a model pool of 12 LVLMs and each data sample contains 4 responses from different models. Each response is annotated in three aspects: helpfulness, visual faithfulness, and ethical considerations. The resulting preference dataset contains more than 380k comparison pairs.\n\n<p align=\"center\">\n <img src=\"URL alt=\"fig2\" width=\"60%\"/>\n</p>"
]
| [
42,
23,
182
]
| [
"passage: TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #arxiv-2312.10665 #region-us \n# Dataset Card for VLFeedback\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL## Dataset Summary\n\nVLFeedback is a large-scale vision-language preference dataset, annotated by GPT-4V. It consists of 80k multi-modal instructions from various souces that encompass various capabilities of LVLMs.\n\n<p align=\"center\">\n <img src=\"URL alt=\"fig1\" width=\"60%\"/>\n</p>\n\nWe build a model pool of 12 LVLMs and each data sample contains 4 responses from different models. Each response is annotated in three aspects: helpfulness, visual faithfulness, and ethical considerations. The resulting preference dataset contains more than 380k comparison pairs.\n\n<p align=\"center\">\n <img src=\"URL alt=\"fig2\" width=\"60%\"/>\n</p>"
]
|
eeaee52cceefe9718ef08bf5446f5fc9805364d4 | This dataset is generated by [Lilac](http://lilacml.com) for a HuggingFace Space: [huggingface.co/spaces/lilacai/nature-of-code](https://huggingface.co/spaces/lilacai/nature-of-code).
Lilac dataset config:
```yaml
namespace: local
name: noc-book-github
source:
repo: nature-of-code/noc-book-2023
ignore_directories:
- docs
- gatsby
- magicbook
- src
ignore_file_extensions:
- .png
github_token: ''
source_name: github
embeddings:
- path: content
embedding: gte-small
settings:
ui:
media_paths:
- content
markdown_paths: []
```
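If you want the exported rows outside the Lilac UI, the repository may also be loadable like any other Hub dataset. This is a sketch under the assumption that the Space exported its data in a format `datasets` can auto-detect:

```python
from datasets import load_dataset

# Assumes the export is in a format datasets can auto-detect (e.g. parquet).
ds = load_dataset("lilacai/nature-of-code-local-noc-book-github")
print(ds)
```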
| lilacai/nature-of-code-local-noc-book-github | [
"region:us"
]
| 2023-11-08T15:47:53+00:00 | {} | 2023-11-08T15:48:10+00:00 | []
| []
| TAGS
#region-us
| This dataset is generated by Lilac for a HuggingFace Space: URL
Lilac dataset config:
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
f04c7ac08adac1b6f56b9047fb9a86a39761261b | # Dataset Card for "bw_spec_cls_80_12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_12 | [
"region:us"
]
| 2023-11-08T15:55:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "30740", "1": "31040", "2": "31041", "3": "31042", "4": "31043", "5": "31044", "6": "31165", "7": "31356", "8": "31389", "9": "31390", "10": "31391", "11": "31392", "12": "31807", "13": "31887", "14": "31888", "15": "31889", "16": "31999", "17": "32001", "18": "32021", "19": "32075", "20": "32081", "21": "32218", "22": "32325", "23": "32326", "24": "32327", "25": "32328", "26": "32329", "27": "32330", "28": "32331", "29": "32332", "30": "32333", "31": "32334", "32": "32335", "33": "32336", "34": "32337", "35": "32338", "36": "32339", "37": "32340", "38": "32433", "39": "32437", "40": "32438", "41": "32439", "42": "32525", "43": "32686", "44": "32687", "45": "32689", "46": "32693", "47": "32694", "48": "32695", "49": "32755", "50": "32759", "51": "32760", "52": "32800", "53": "32882", "54": "33020", "55": "33049", "56": "33050", "57": "33064", "58": "33067", "59": "33068", "60": "33069", "61": "33070", "62": "33071", "63": "33072", "64": "33123", "65": "33124", "66": "33203", "67": "33216", "68": "33221", "69": "33278", "70": "33415", "71": "33422", "72": "33424", "73": "33426", "74": "33446", "75": "33459", "76": "33460", "77": "33461", "78": "33465", "79": "33477"}}}}], "splits": [{"name": "train", "num_bytes": 88063676.8, "num_examples": 1600}], "download_size": 88702877, "dataset_size": 88063676.8}} | 2023-11-08T15:56:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_12"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_12\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_12\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_12\"\n\nMore Information needed"
]
|
c8b93d62eb7c4353c0691588403098dcd4b91d9e | # Dataset Card for "uf-zephyr-iteration2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | argilla/uf-zephyr-iteration2 | [
"region:us"
]
| 2023-11-08T15:59:14+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6238, "num_examples": 10}], "download_size": 6878, "dataset_size": 6238}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T16:09:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "uf-zephyr-iteration2"
More Information needed | [
"# Dataset Card for \"uf-zephyr-iteration2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"uf-zephyr-iteration2\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"uf-zephyr-iteration2\"\n\nMore Information needed"
]
|
1bb4b54c2639e2ced616a1924d2a63dab5cb5c1f |
# Scientific Openly-Licensed Publications
This repository contains companion material for the following [publication](https://openaccess.thecvf.com/content/WACV2024/papers/Tarsi_SciOL_and_MuLMS-Img_Introducing_a_Large-Scale_Multimodal_Scientific_Dataset_and_WACV_2024_paper.pdf):
> Tim Tarsi, Heike Adel, Jan Hendrik Metzen, Dan Zhang, Matteo Finco, Annemarie Friedrich. **SciOL and MuLMS-Img: Introducing A Large-Scale Multimodal Scientific Dataset and Models for Image-Text Tasks in the Scientific Domain.** WACV 2024.
Please cite this paper if using the dataset, and direct any questions regarding the dataset
to [Tim Tarsi](mailto:[email protected])
## Summary
Scientific Openly-Licensed Publications (SciOL) is the largest openly-licensed pre-training corpus for multimodal models in the scientific domain, covering multiple sciences including materials science, physics, and computer science. It consists of over 2.7M scientific publications converted into semi-structured data. SciOL contains over 18 million figure-caption pairs.
**Note: This repository only contains the figures and captions of SciOL. For the textual data see:** [SciOL-text](https://huggingface.co/datasets/Timbrt/SciOL-text)
## Data Format
We provide the data in the webdataset format: captions are stored in plain text files, then grouped and compressed together with the images. Each tar file contains 1000 images and captions. Corresponding figures and captions share the same filename (excluding the extension). We split the data into train, test, and dev sets.
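Because figures and captions share a basename inside each tar shard, the splits can be streamed with the `webdataset` library. In the sketch below, the shard filename pattern and image extensions are placeholders, not the actual file names in this repository:

```python
import webdataset as wds

# Placeholder pattern: substitute the real shard names from the repo.
urls = "train/shard-{000000..000099}.tar"

dataset = (
    wds.WebDataset(urls)
    .decode("pil")                      # decode image bytes to PIL images
    .to_tuple("png;jpg;jpeg", "txt")    # pair each figure with its caption
)

for figure, caption in dataset:
    print(figure.size, caption[:80])
    break
```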
## Citation
If you use our dataset in your work, please cite our paper:
```
@InProceedings{Tarsi_2024_WACV,
author = {Tarsi, Tim and Adel, Heike and Metzen, Jan Hendrik and Zhang, Dan and Finco, Matteo and Friedrich, Annemarie},
title = {SciOL and MuLMS-Img: Introducing a Large-Scale Multimodal Scientific Dataset and Models for Image-Text Tasks in the Scientific Domain},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2024},
pages = {4560-4571}
}
```
## License
The SciOL corpus is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. | Timbrt/SciOL-CI | [
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2023-11-08T16:02:58+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10M<n<100M"], "pretty_name": "Scientific Openly-Licensed Publications - Caption Images"} | 2024-01-01T11:28:47+00:00 | []
| [
"en"
]
| TAGS
#size_categories-10M<n<100M #language-English #license-cc-by-4.0 #region-us
|
# Scientific Openly-Licensed Publications
This repository contains companion material for the following publication:
> Tim Tarsi, Heike Adel, Jan Hendrik Metzen, Dan Zhang, Matteo Finco, Annemarie Friedrich. SciOL and MuLMS-Img: Introducing A Large-Scale Multimodal Scientific Dataset and Models for Image-Text Tasks in the Scientific Domain. WACV 2024.
Please cite this paper if using the dataset, and direct any questions regarding the dataset
to Tim Tarsi
## Summary
Scientific Openly-Licensed Publications (SciOL) is the largest openly-licensed pre-training corpus for multimodal models in the scientific domain, covering multiple sciences including materials science, physics, and computer science. It consists of over 2.7M scientific publications converted into semi-structured data. SciOL contains over 18 million figure-caption pairs.
Note: This repository only contains the figures and captions of SciOL. For the textual data see: SciOL-text
## Data Format
We provide the data in the webdataset format: captions are stored in plain text files, then grouped and compressed together with the images. Each tar file contains 1000 images and captions. Corresponding figures and captions share the same filename (excluding the extension). We split the data into train, test, and dev sets.
If you use our dataset in your work, please cite our paper:
## License
The SciOL corpus is released under the CC BY 4.0 license. | [
"# Scientific Openly-Licensed Publications\nThis repository contains companion material for the following publication:\n\n> Tim Tarsi, Heike Adel, Jan Hendrik Metzen, Dan Zhang, Matteo Finco, Annemarie Friedrich. SciOL and MuLMS-Img: Introducing A Large-Scale Multimodal Scientific Dataset and Models for Image-Text Tasks in the Scientific Domain. WACV 2024.\n\nPlease cite this paper if using the dataset, and direct any questions regarding the dataset\nto Tim Tarsi",
"## Summary\nScientific Openly-Licensed Publications (SciOL) is the largest openly-licensed pre-training corpus for multimodal models in the scientific domain, covering multiple sciences including materials science, physics, and computer science. It consists of over 2.7M scientific scientific publications converted into semi-structured data. SciOL contains over 18 Million figure-caption pairs.\n\nNote: This repository only contains the figures and captions of SciOL. For the textual data see: SciOL-text",
"## Data Format\nWe provide the data in the webdataset format, e.g., captions in plain text files and group and compress them together with the images. Each tar file contains 1000 images and captions. Corresponding figures and captions have the same filename (excluding extention). We split the data into a train, test and dev set.\n\n\nIf you use our dataset in your work, please cite our paper:",
"## License\n\nThe SciOL corpus is released under the CC BY 4.0 license."
]
| [
"TAGS\n#size_categories-10M<n<100M #language-English #license-cc-by-4.0 #region-us \n",
"# Scientific Openly-Licensed Publications\nThis repository contains companion material for the following publication:\n\n> Tim Tarsi, Heike Adel, Jan Hendrik Metzen, Dan Zhang, Matteo Finco, Annemarie Friedrich. SciOL and MuLMS-Img: Introducing A Large-Scale Multimodal Scientific Dataset and Models for Image-Text Tasks in the Scientific Domain. WACV 2024.\n\nPlease cite this paper if using the dataset, and direct any questions regarding the dataset\nto Tim Tarsi",
"## Summary\nScientific Openly-Licensed Publications (SciOL) is the largest openly-licensed pre-training corpus for multimodal models in the scientific domain, covering multiple sciences including materials science, physics, and computer science. It consists of over 2.7M scientific scientific publications converted into semi-structured data. SciOL contains over 18 Million figure-caption pairs.\n\nNote: This repository only contains the figures and captions of SciOL. For the textual data see: SciOL-text",
"## Data Format\nWe provide the data in the webdataset format, e.g., captions in plain text files and group and compress them together with the images. Each tar file contains 1000 images and captions. Corresponding figures and captions have the same filename (excluding extention). We split the data into a train, test and dev set.\n\n\nIf you use our dataset in your work, please cite our paper:",
"## License\n\nThe SciOL corpus is released under the CC BY 4.0 license."
]
| [
31,
113,
123,
95,
15
]
| [
"passage: TAGS\n#size_categories-10M<n<100M #language-English #license-cc-by-4.0 #region-us \n# Scientific Openly-Licensed Publications\nThis repository contains companion material for the following publication:\n\n> Tim Tarsi, Heike Adel, Jan Hendrik Metzen, Dan Zhang, Matteo Finco, Annemarie Friedrich. SciOL and MuLMS-Img: Introducing A Large-Scale Multimodal Scientific Dataset and Models for Image-Text Tasks in the Scientific Domain. WACV 2024.\n\nPlease cite this paper if using the dataset, and direct any questions regarding the dataset\nto Tim Tarsi## Summary\nScientific Openly-Licensed Publications (SciOL) is the largest openly-licensed pre-training corpus for multimodal models in the scientific domain, covering multiple sciences including materials science, physics, and computer science. It consists of over 2.7M scientific scientific publications converted into semi-structured data. SciOL contains over 18 Million figure-caption pairs.\n\nNote: This repository only contains the figures and captions of SciOL. For the textual data see: SciOL-text## Data Format\nWe provide the data in the webdataset format, e.g., captions in plain text files and group and compress them together with the images. Each tar file contains 1000 images and captions. Corresponding figures and captions have the same filename (excluding extention). We split the data into a train, test and dev set.\n\n\nIf you use our dataset in your work, please cite our paper:## License\n\nThe SciOL corpus is released under the CC BY 4.0 license."
]
|
9cad65ca3dc0dca1e1d8deef5478006a23863d9e | # Dataset Card for "xlsum_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.25431289948140545, 'rouge2': 0.06235500652708138, 'rougeL': 0.16542968042081027, 'rougeLsum': 0.16542968042081027}
bert= {'precision': 0.6915352174463172, 'recall': 0.7106410252424898, 'f1': 0.7006534646030917} | arthurmluz/xlsum_data-wiki_gptextsum_results | [
"region:us"
]
| 2023-11-08T16:05:20+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 26768065, "num_examples": 7175}], "download_size": 16308689, "dataset_size": 26768065}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T16:16:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "xlsum_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.25431289948140545, 'rouge2': 0.06235500652708138, 'rougeL': 0.16542968042081027, 'rougeLsum': 0.16542968042081027}
bert= {'precision': 0.6915352174463172, 'recall': 0.7106410252424898, 'f1': 0.7006534646030917} | [
"# Dataset Card for \"xlsum_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.25431289948140545, 'rouge2': 0.06235500652708138, 'rougeL': 0.16542968042081027, 'rougeLsum': 0.16542968042081027}\n\nbert= {'precision': 0.6915352174463172, 'recall': 0.7106410252424898, 'f1': 0.7006534646030917}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"xlsum_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.25431289948140545, 'rouge2': 0.06235500652708138, 'rougeL': 0.16542968042081027, 'rougeLsum': 0.16542968042081027}\n\nbert= {'precision': 0.6915352174463172, 'recall': 0.7106410252424898, 'f1': 0.7006534646030917}"
]
| [
6,
134
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"xlsum_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.25431289948140545, 'rouge2': 0.06235500652708138, 'rougeL': 0.16542968042081027, 'rougeLsum': 0.16542968042081027}\n\nbert= {'precision': 0.6915352174463172, 'recall': 0.7106410252424898, 'f1': 0.7006534646030917}"
]
|
c63c91c791a1d7d8058b868ff963d98219952a8a |
## Clustering HAL
This dataset was created by scraping data from the HAL platform.
Over 80,000 articles have been scraped, keeping their id, title and category.
It was originally used for the French version of [MTEB](https://github.com/embeddings-benchmark/mteb), but it can also be used for various clustering or classification tasks.
### Usage
To use this dataset, you can run the following code:
```py
from datasets import load_dataset
dataset = load_dataset("lyon-nlp/clustering-hal-s2s", "test")
``` | lyon-nlp/clustering-hal-s2s | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:fr",
"license:apache-2.0",
"region:us"
]
| 2023-11-08T16:19:39+00:00 | {"language": ["fr"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"]} | 2023-12-11T16:35:24+00:00 | []
| [
"fr"
]
| TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us
|
## Clustering HAL
This dataset was created by scraping data from the HAL platform.
Over 80,000 articles have been scraped, keeping their id, title and category.
It was originally used for the French version of MTEB, but it can also be used for various clustering or classification tasks.
### Usage
To use this dataset, you can run the following code:
| [
"## Clustering HAL\n\nThis dataset was created by scrapping data from the HAL platform. \nOver 80,000 articles have been scrapped to keep their id, title and category. \n\nIt was originally used for the French version of MTEB, but it can also be used for various clustering or classification tasks.",
"### Usage\n\nTo use this dataset, you can run the following code :"
]
| [
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us \n",
"## Clustering HAL\n\nThis dataset was created by scrapping data from the HAL platform. \nOver 80,000 articles have been scrapped to keep their id, title and category. \n\nIt was originally used for the French version of MTEB, but it can also be used for various clustering or classification tasks.",
"### Usage\n\nTo use this dataset, you can run the following code :"
]
| [
43,
65,
17
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us \n## Clustering HAL\n\nThis dataset was created by scrapping data from the HAL platform. \nOver 80,000 articles have been scrapped to keep their id, title and category. \n\nIt was originally used for the French version of MTEB, but it can also be used for various clustering or classification tasks.### Usage\n\nTo use this dataset, you can run the following code :"
]
|
f8561cb1a0bc44bc892e468620d2b45aa9c294b9 | # Dataset Card for "bw_spec_cls_80_13"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_13 | [
"region:us"
]
| 2023-11-08T16:21:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "33538", "1": "34003", "2": "34147", "3": "34167", "4": "34257", "5": "34258", "6": "34263", "7": "34484", "8": "34510", "9": "34511", "10": "34994", "11": "34996", "12": "35007", "13": "35296", "14": "35299", "15": "35443", "16": "35444", "17": "35527", "18": "35534", "19": "35535", "20": "35537", "21": "35539", "22": "35541", "23": "35543", "24": "35571", "25": "35734", "26": "36143", "27": "36144", "28": "36145", "29": "36146", "30": "36147", "31": "36272", "32": "36273", "33": "36275", "34": "36277", "35": "36304", "36": "36322", "37": "36333", "38": "36371", "39": "36380", "40": "36388", "41": "36428", "42": "36435", "43": "36481", "44": "36526", "45": "36567", "46": "36614", "47": "36615", "48": "36616", "49": "36618", "50": "36643", "51": "36659", "52": "36983", "53": "36984", "54": "36985", "55": "36986", "56": "36987", "57": "36988", "58": "36990", "59": "36992", "60": "36994", "61": "36997", "62": "36999", "63": "37041", "64": "37111", "65": "37113", "66": "37119", "67": "37121", "68": "37131", "69": "37136", "70": "37141", "71": "37147", "72": "37324", "73": "37325", "74": "37368", "75": "37369", "76": "37416", "77": "37417", "78": "37423", "79": "37538"}}}}], "splits": [{"name": "train", "num_bytes": 86840953.6, "num_examples": 1600}], "download_size": 86643054, "dataset_size": 86840953.6}} | 2023-11-08T16:21:14+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_13"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_13\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_13\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_13\"\n\nMore Information needed"
]
|
392ba3f5bcc8c51f578786c1fc3dae648662cb9b |
This is a re-edit of the Alloprof dataset (which can be found here: https://huggingface.co/datasets/antoinelb7/alloprof).
For more information about the data source and the features, please refer to the original dataset card made by the authors, along with their paper available here: https://arxiv.org/abs/2302.07738
This re-edition of the dataset is a preprocessed version of the original, **in a more ready-to-use format**. Essentially, the texts have been cleaned, and data not usable for retrieval has been discarded.
### Why a re-edition?
It has been made for easier usage in the MTEB benchmarking pipeline, in order to contribute to the MTEB leaderboard: https://huggingface.co/spaces/mteb/leaderboard.
For more information about the project, please refer to the associated paper: https://arxiv.org/pdf/2210.07316.pdf
### Usage
To use the dataset, you need to specify the subset you want (documents or queries) when calling the load_dataset() method.
For example, to get the queries, use:
```py
from datasets import load_dataset
dataset = load_dataset("lyon-nlp/alloprof", "queries")
```
| lyon-nlp/alloprof | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:fr",
"arxiv:2302.07738",
"arxiv:2210.07316",
"region:us"
]
| 2023-11-08T16:45:33+00:00 | {"language": ["fr"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification", "question-answering"], "pretty_name": "alloprof"} | 2024-01-17T14:33:09+00:00 | [
"2302.07738",
"2210.07316"
]
| [
"fr"
]
| TAGS
#task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-French #arxiv-2302.07738 #arxiv-2210.07316 #region-us
|
This is a re-edit of the Alloprof dataset (which can be found here: URL
For more information about the data source and the features, please refer to the original dataset card made by the authors, along with their paper available here: URL
This re-edition of the dataset is a preprocessed version of the original, in a more ready-to-use format. Essentially, the texts have been cleaned, and data not usable for retrieval has been discarded.
### Why a re-edition?
It has been made for easier usage in the MTEB benchmarking pipeline, in order to contribute to the MTEB leaderboard: URL
For more information about the project, please refer to the associated paper: URL
### Usage
To use the dataset, you need to specify the subset you want (documents or queries) when calling the load_dataset() method.
For example, to get the queries, use:
| [
"### Why a re-edition ?\n\nIt has been made for easier usage in the MTEB benchmarking pipeline in order to contribute in the MTEB leaderboard : URL\n\nFor more information about the project, please refer to the associated paper : URL",
"### Usage\n\nTo use the dataset, you need to specify the subset you want (documents or queries) when calling the load_dataset() method.\nFor example, to get the queries use :"
]
| [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-French #arxiv-2302.07738 #arxiv-2210.07316 #region-us \n",
"### Why a re-edition ?\n\nIt has been made for easier usage in the MTEB benchmarking pipeline in order to contribute in the MTEB leaderboard : URL\n\nFor more information about the project, please refer to the associated paper : URL",
"### Usage\n\nTo use the dataset, you need to specify the subset you want (documents or queries) when calling the load_dataset() method.\nFor example, to get the queries use :"
]
| [
65,
51,
48
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-French #arxiv-2302.07738 #arxiv-2210.07316 #region-us \n### Why a re-edition ?\n\nIt has been made for easier usage in the MTEB benchmarking pipeline in order to contribute in the MTEB leaderboard : URL\n\nFor more information about the project, please refer to the associated paper : URL### Usage\n\nTo use the dataset, you need to specify the subset you want (documents or queries) when calling the load_dataset() method.\nFor example, to get the queries use :"
]
|
1712d38d4b27b6b61a84228ab87719b19cda255d | # Dataset Card for "bw_spec_cls_80_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_14 | [
"region:us"
]
| 2023-11-08T16:46:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "37592", "1": "37725", "2": "37727", "3": "37730", "4": "37731", "5": "37779", "6": "37781", "7": "37784", "8": "37859", "9": "37911", "10": "37920", "11": "38312", "12": "38321", "13": "38323", "14": "38326", "15": "38351", "16": "38352", "17": "38353", "18": "38354", "19": "38361", "20": "38362", "21": "38363", "22": "38365", "23": "38399", "24": "38435", "25": "38450", "26": "38522", "27": "38557", "28": "38560", "29": "38847", "30": "38878", "31": "38955", "32": "38961", "33": "38964", "34": "38965", "35": "38966", "36": "38967", "37": "39188", "38": "39259", "39": "39278", "40": "39291", "41": "39298", "42": "39357", "43": "39359", "44": "39378", "45": "39484", "46": "39488", "47": "39530", "48": "39658", "49": "39659", "50": "39660", "51": "39661", "52": "39662", "53": "39663", "54": "39664", "55": "39665", "56": "39666", "57": "39667", "58": "39875", "59": "39900", "60": "39904", "61": "40121", "62": "40122", "63": "40123", "64": "40133", "65": "40134", "66": "40139", "67": "40141", "68": "40147", "69": "40161", "70": "40180", "71": "40182", "72": "40250", "73": "40509", "74": "40525", "75": "40541", "76": "40542", "77": "40598", "78": "40683", "79": "40725"}}}}], "splits": [{"name": "train", "num_bytes": 88556044.8, "num_examples": 1600}], "download_size": 88996626, "dataset_size": 88556044.8}} | 2023-11-08T16:46:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_14"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_14\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_14\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_14\"\n\nMore Information needed"
]
|
70415b63d61a41c63f45a2e20cb685822d676354 | # Dataset Card for "xsum_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | determined-ai/xsum_short | [
"region:us"
]
| 2023-11-08T16:52:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1007520.5456002352, "num_examples": 429}], "download_size": 51135, "dataset_size": 1007520.5456002352}} | 2023-11-08T16:52:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "xsum_short"
More Information needed | [
"# Dataset Card for \"xsum_short\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"xsum_short\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"xsum_short\"\n\nMore Information needed"
]
|
5f1c257533331b1b8ed1ab0b40997816f6e05ddb | # Dataset Card for "es-0712-no-demoji-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gg-ai/es-0712-no-demoji-m | [
"region:us"
]
| 2023-11-08T16:54:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "clean_text", "dtype": "string"}, {"name": "sent", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5850039, "num_examples": 16256}, {"name": "test", "num_bytes": 1177134, "num_examples": 3252}, {"name": "val", "num_bytes": 297532, "num_examples": 813}], "download_size": 4682068, "dataset_size": 7324705}} | 2023-11-08T16:54:39+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "es-0712-no-demoji-m"
More Information needed | [
"# Dataset Card for \"es-0712-no-demoji-m\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"es-0712-no-demoji-m\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"es-0712-no-demoji-m\"\n\nMore Information needed"
]
|
251a31d4d646536b5c082482c16d7b6bdeedb7e7 | # Dataset Card for "swiss_legislation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | manu/swiss_legislation | [
"region:us"
]
| 2023-11-08T16:57:12+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 276089490, "num_examples": 11197}], "download_size": 114594480, "dataset_size": 276089490}} | 2023-11-08T16:57:39+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "swiss_legislation"
More Information needed | [
"# Dataset Card for \"swiss_legislation\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"swiss_legislation\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"swiss_legislation\"\n\nMore Information needed"
]
|
45dd729a0b9e928536643fe690fd2111961d02e5 |
# Dataset Card for Evaluation run of PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B](https://huggingface.co/PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PeanutJar__Mistral-v0.1-PeanutButter-v0.0.2-7B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T16:55:51.659477](https://huggingface.co/datasets/open-llm-leaderboard/details_PeanutJar__Mistral-v0.1-PeanutButter-v0.0.2-7B_public/blob/main/results_2023-11-08T16-55-51.659477.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6373187668436219,
"acc_stderr": 0.032198754972879835,
"acc_norm": 0.6466738397796956,
"acc_norm_stderr": 0.03289417683130313,
"mc1": 0.3023255813953488,
"mc1_stderr": 0.016077509266133026,
"mc2": 0.45917059894485873,
"mc2_stderr": 0.014568844776078935,
"em": 0.0053481543624161075,
"em_stderr": 0.0007469252903319645,
"f1": 0.06526845637583892,
"f1_stderr": 0.0014953213199520477
},
"harness|arc:challenge|25": {
"acc": 0.5844709897610921,
"acc_stderr": 0.014401366641216388,
"acc_norm": 0.6177474402730375,
"acc_norm_stderr": 0.014200454049979277
},
"harness|hellaswag|10": {
"acc": 0.6389165504879506,
"acc_stderr": 0.004793330525656208,
"acc_norm": 0.8410675164309899,
"acc_norm_stderr": 0.003648659041493648
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926604,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926604
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322666,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322666
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566017,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566017
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.036563436533531585,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.036563436533531585
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.04940635630605659,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.04940635630605659
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3835978835978836,
"acc_stderr": 0.025043757318520196,
"acc_norm": 0.3835978835978836,
"acc_norm_stderr": 0.025043757318520196
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.042857142857142816,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.042857142857142816
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.024362599693031096,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.024362599693031096
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.030088629490217487,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.030088629490217487
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.023814477086593542,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.023814477086593542
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6461538461538462,
"acc_stderr": 0.024243783994062157,
"acc_norm": 0.6461538461538462,
"acc_norm_stderr": 0.024243783994062157
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.029116617606083015,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.029116617606083015
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6302521008403361,
"acc_stderr": 0.03135709599613591,
"acc_norm": 0.6302521008403361,
"acc_norm_stderr": 0.03135709599613591
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8220183486238533,
"acc_stderr": 0.016399436366612927,
"acc_norm": 0.8220183486238533,
"acc_norm_stderr": 0.016399436366612927
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.033922384053216174,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.033922384053216174
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7892156862745098,
"acc_stderr": 0.028626547912437406,
"acc_norm": 0.7892156862745098,
"acc_norm_stderr": 0.028626547912437406
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.02747974455080851,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.02747974455080851
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990947,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990947
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.03226219377286775,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.03226219377286775
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8122605363984674,
"acc_stderr": 0.013964393769899126,
"acc_norm": 0.8122605363984674,
"acc_norm_stderr": 0.013964393769899126
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.024257901705323378,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.024257901705323378
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3329608938547486,
"acc_stderr": 0.015761716178397563,
"acc_norm": 0.3329608938547486,
"acc_norm_stderr": 0.015761716178397563
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.024630048979824775,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.024630048979824775
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.025670259242188943,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.025670259242188943
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5070921985815603,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.5070921985815603,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4556714471968709,
"acc_stderr": 0.012719949543032207,
"acc_norm": 0.4556714471968709,
"acc_norm_stderr": 0.012719949543032207
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.028501452860396556,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.028501452860396556
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706214,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706214
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.026508590656233264,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.026508590656233264
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3023255813953488,
"mc1_stderr": 0.016077509266133026,
"mc2": 0.45917059894485873,
"mc2_stderr": 0.014568844776078935
},
"harness|winogrande|5": {
"acc": 0.7837411207576953,
"acc_stderr": 0.01157061486140935
},
"harness|drop|3": {
"em": 0.0053481543624161075,
"em_stderr": 0.0007469252903319645,
"f1": 0.06526845637583892,
"f1_stderr": 0.0014953213199520477
},
"harness|gsm8k|5": {
"acc": 0.17437452615617893,
"acc_stderr": 0.010451421361976233
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_PeanutJar__Mistral-v0.1-PeanutButter-v0.0.2-7B | [
"region:us"
]
| 2023-11-08T16:58:50+00:00 | {"pretty_name": "Evaluation run of PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B](https://huggingface.co/PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PeanutJar__Mistral-v0.1-PeanutButter-v0.0.2-7B_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T16:55:51.659477](https://huggingface.co/datasets/open-llm-leaderboard/details_PeanutJar__Mistral-v0.1-PeanutButter-v0.0.2-7B_public/blob/main/results_2023-11-08T16-55-51.659477.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6373187668436219,\n \"acc_stderr\": 0.032198754972879835,\n \"acc_norm\": 0.6466738397796956,\n \"acc_norm_stderr\": 0.03289417683130313,\n \"mc1\": 0.3023255813953488,\n \"mc1_stderr\": 0.016077509266133026,\n \"mc2\": 0.45917059894485873,\n \"mc2_stderr\": 0.014568844776078935,\n \"em\": 0.0053481543624161075,\n \"em_stderr\": 0.0007469252903319645,\n \"f1\": 0.06526845637583892,\n \"f1_stderr\": 0.0014953213199520477\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5844709897610921,\n \"acc_stderr\": 0.014401366641216388,\n \"acc_norm\": 0.6177474402730375,\n \"acc_norm_stderr\": 0.014200454049979277\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6389165504879506,\n \"acc_stderr\": 0.004793330525656208,\n \"acc_norm\": 0.8410675164309899,\n \"acc_norm_stderr\": 0.003648659041493648\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926604,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926604\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n },\n \"harness|hendrycksTest-college_biology|5\": {\n 
\"acc\": 0.7291666666666666,\n \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6416184971098265,\n \"acc_stderr\": 0.036563436533531585,\n \"acc_norm\": 0.6416184971098265,\n \"acc_norm_stderr\": 0.036563436533531585\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.04940635630605659,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.04940635630605659\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3835978835978836,\n \"acc_stderr\": 0.025043757318520196,\n \"acc_norm\": 0.3835978835978836,\n \"acc_norm_stderr\": 0.025043757318520196\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.042857142857142816,\n \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.042857142857142816\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n \"acc_stderr\": 0.024362599693031096,\n \"acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.024362599693031096\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5369458128078818,\n \"acc_stderr\": 0.035083705204426656,\n \"acc_norm\": 0.5369458128078818,\n \"acc_norm_stderr\": 0.035083705204426656\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n },\n 
\"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.023814477086593542,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.023814477086593542\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6461538461538462,\n \"acc_stderr\": 0.024243783994062157,\n \"acc_norm\": 0.6461538461538462,\n \"acc_norm_stderr\": 0.024243783994062157\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083015,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083015\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6302521008403361,\n \"acc_stderr\": 0.03135709599613591,\n \"acc_norm\": 0.6302521008403361,\n \"acc_norm_stderr\": 0.03135709599613591\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8220183486238533,\n \"acc_stderr\": 0.016399436366612927,\n \"acc_norm\": 0.8220183486238533,\n \"acc_norm_stderr\": 0.016399436366612927\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.033922384053216174,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.033922384053216174\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.028626547912437406,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.028626547912437406\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7679324894514767,\n \"acc_stderr\": 0.02747974455080851,\n \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.02747974455080851\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.03226219377286775,\n \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.03226219377286775\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n 
\"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n \"acc_stderr\": 0.013964393769899126,\n \"acc_norm\": 0.8122605363984674,\n \"acc_norm_stderr\": 0.013964393769899126\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.024257901705323378,\n \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.024257901705323378\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3329608938547486,\n \"acc_stderr\": 0.015761716178397563,\n \"acc_norm\": 0.3329608938547486,\n \"acc_norm_stderr\": 0.015761716178397563\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7549019607843137,\n \"acc_stderr\": 0.024630048979824775,\n \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.024630048979824775\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n \"acc_stderr\": 0.025670259242188943,\n \"acc_norm\": 0.7138263665594855,\n \"acc_norm_stderr\": 0.025670259242188943\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5070921985815603,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.5070921985815603,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4556714471968709,\n \"acc_stderr\": 0.012719949543032207,\n \"acc_norm\": 0.4556714471968709,\n \"acc_norm_stderr\": 0.012719949543032207\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396556,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396556\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706214,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706214\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.026508590656233264,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.026508590656233264\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3023255813953488,\n \"mc1_stderr\": 0.016077509266133026,\n \"mc2\": 0.45917059894485873,\n \"mc2_stderr\": 
0.014568844776078935\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7837411207576953,\n \"acc_stderr\": 0.01157061486140935\n },\n \"harness|drop|3\": {\n \"em\": 0.0053481543624161075,\n \"em_stderr\": 0.0007469252903319645,\n \"f1\": 0.06526845637583892,\n \"f1_stderr\": 0.0014953213199520477\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17437452615617893,\n \"acc_stderr\": 0.010451421361976233\n }\n}\n```", "repo_url": "https://huggingface.co/PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|arc:challenge|25_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|drop|3_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|gsm8k|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hellaswag|10_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T16-55-51.659477.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T16-55-51.659477.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-08T16-55-51.659477.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["**/details_harness|winogrande|5_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T16-55-51.659477.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T16_55_51.659477", "path": ["results_2023-11-08T16-55-51.659477.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T16-55-51.659477.parquet"]}]}]} | 2023-11-08T16:59:57+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
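For instance, a minimal sketch (the dataset path below is inferred from the `open-llm-leaderboard/details_<org>__<model>_public` naming pattern these evaluation repos follow, so treat the exact repo id as an assumption):

```python
from datasets import load_dataset

# "harness_winogrande_5" is one of the 64 task configurations;
# split="train" always points to the results of the latest run.
# The repo id follows the usual naming pattern and is an assumption.
data = load_dataset("open-llm-leaderboard/details_PeanutJar__Mistral-v0.1-PeanutButter-v0.0.2-7B_public",
    "harness_winogrande_5",
    split="train")
```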
## Latest results
These are the latest results from run 2023-11-08T16:55:51.659477 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
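To inspect those aggregated numbers directly, one option (a sketch under the same assumed repo id as above) is to load the "results" configuration, whose "latest" split mirrors the most recent timestamped run:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of each run;
# "latest" is an alias for the newest timestamped split.
results = load_dataset("open-llm-leaderboard/details_PeanutJar__Mistral-v0.1-PeanutButter-v0.0.2-7B_public",
    "results",
    split="latest")
```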
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T16:55:51.659477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T16:55:51.659477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
32,
31,
181,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/Mistral-v0.1-PeanutButter-v0.0.2-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T16:55:51.659477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
927b28a3a1be508dcfdd6422ef26d77d50e93bd5 | # Dataset Card for "FineTuneDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/FineTuneDataset | [
"region:us"
]
| 2023-11-08T17:05:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "sequence", "dtype": "string"}, {"name": "label", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 20058390, "num_examples": 17033}, {"name": "validation", "num_bytes": 2192697, "num_examples": 1893}], "download_size": 9789079, "dataset_size": 22251087}} | 2023-11-08T17:05:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "FineTuneDataset"
More Information needed | [
"# Dataset Card for \"FineTuneDataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"FineTuneDataset\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"FineTuneDataset\"\n\nMore Information needed"
]
|
cd25d7e74f5f275ee103c406d96c6828e8a4e714 |
# Dataset Card for Evaluation run of internlm/internlm-20b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/internlm/internlm-20b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [internlm/internlm-20b](https://huggingface.co/internlm/internlm-20b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_internlm__internlm-20b_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T17:07:38.003322](https://huggingface.co/datasets/open-llm-leaderboard/details_internlm__internlm-20b_public/blob/main/results_2023-11-08T17-07-38.003322.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.446623322147651,
"em_stderr": 0.005091207245611711,
"f1": 0.4853261325503364,
"f1_stderr": 0.004942779780816972,
"acc": 0.5010965351881582,
"acc_stderr": 0.011778846211925709
},
"harness|drop|3": {
"em": 0.446623322147651,
"em_stderr": 0.005091207245611711,
"f1": 0.4853261325503364,
"f1_stderr": 0.004942779780816972
},
"harness|gsm8k|5": {
"acc": 0.2350265352539803,
"acc_stderr": 0.011679491349994874
},
"harness|winogrande|5": {
"acc": 0.7671665351223362,
"acc_stderr": 0.011878201073856542
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_internlm__internlm-20b | [
"region:us"
]
| 2023-11-08T17:07:53+00:00 | {"pretty_name": "Evaluation run of internlm/internlm-20b", "dataset_summary": "Dataset automatically created during the evaluation run of model [internlm/internlm-20b](https://huggingface.co/internlm/internlm-20b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_internlm__internlm-20b_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T17:07:38.003322](https://huggingface.co/datasets/open-llm-leaderboard/details_internlm__internlm-20b_public/blob/main/results_2023-11-08T17-07-38.003322.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.446623322147651,\n \"em_stderr\": 0.005091207245611711,\n \"f1\": 0.4853261325503364,\n \"f1_stderr\": 0.004942779780816972,\n \"acc\": 0.5010965351881582,\n \"acc_stderr\": 0.011778846211925709\n },\n \"harness|drop|3\": {\n \"em\": 0.446623322147651,\n \"em_stderr\": 0.005091207245611711,\n \"f1\": 0.4853261325503364,\n \"f1_stderr\": 0.004942779780816972\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2350265352539803,\n \"acc_stderr\": 0.011679491349994874\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856542\n }\n}\n```", "repo_url": "https://huggingface.co/internlm/internlm-20b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_08T17_07_38.003322", "path": ["**/details_harness|drop|3_2023-11-08T17-07-38.003322.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T17-07-38.003322.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_08T17_07_38.003322", "path": ["**/details_harness|gsm8k|5_2023-11-08T17-07-38.003322.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T17-07-38.003322.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_08T17_07_38.003322", "path": ["**/details_harness|winogrande|5_2023-11-08T17-07-38.003322.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T17-07-38.003322.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T17_07_38.003322", "path": ["results_2023-11-08T17-07-38.003322.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T17-07-38.003322.parquet"]}]}]} | 2023-11-08T17:08:02+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of internlm/internlm-20b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model internlm/internlm-20b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
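For example, mirroring the snippet from the full card earlier in this collection:

```python
from datasets import load_dataset

# "harness_winogrande_5" is one of the 3 task configurations;
# split="train" always points to the results of the latest run.
data = load_dataset("open-llm-leaderboard/details_internlm__internlm-20b_public",
    "harness_winogrande_5",
    split="train")
```

The aggregated metrics live in the "results" configuration, whose "latest" split mirrors the most recent timestamped run.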
## Latest results
These are the latest results from run 2023-11-08T17:07:38.003322 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of internlm/internlm-20b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model internlm/internlm-20b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T17:07:38.003322(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of internlm/internlm-20b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model internlm/internlm-20b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T17:07:38.003322(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
17,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of internlm/internlm-20b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model internlm/internlm-20b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T17:07:38.003322(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
09946bfd5643610bf771e93d47b94f57d38f2a30 |
# Dataset Card for Evaluation run of internlm/internlm-20b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/internlm/internlm-20b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [internlm/internlm-20b-chat](https://huggingface.co/internlm/internlm-20b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_internlm__internlm-20b-chat_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T17:10:14.815999](https://huggingface.co/datasets/open-llm-leaderboard/details_internlm__internlm-20b-chat_public/blob/main/results_2023-11-08T17-10-14.815999.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.016149328859060404,
"em_stderr": 0.001290866955681033,
"f1": 0.10652579697986504,
"f1_stderr": 0.0024253657455140664,
"acc": 0.4874752643801689,
"acc_stderr": 0.011119649443380293
},
"harness|drop|3": {
"em": 0.016149328859060404,
"em_stderr": 0.001290866955681033,
"f1": 0.10652579697986504,
"f1_stderr": 0.0024253657455140664
},
"harness|gsm8k|5": {
"acc": 0.18726307808946172,
"acc_stderr": 0.01074591419951081
},
"harness|winogrande|5": {
"acc": 0.7876874506708761,
"acc_stderr": 0.011493384687249779
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_internlm__internlm-20b-chat | [
"region:us"
]
| 2023-11-08T17:10:30+00:00 | {"pretty_name": "Evaluation run of internlm/internlm-20b-chat", "dataset_summary": "Dataset automatically created during the evaluation run of model [internlm/internlm-20b-chat](https://huggingface.co/internlm/internlm-20b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_internlm__internlm-20b-chat_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T17:10:14.815999](https://huggingface.co/datasets/open-llm-leaderboard/details_internlm__internlm-20b-chat_public/blob/main/results_2023-11-08T17-10-14.815999.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.016149328859060404,\n \"em_stderr\": 0.001290866955681033,\n \"f1\": 0.10652579697986504,\n \"f1_stderr\": 0.0024253657455140664,\n \"acc\": 0.4874752643801689,\n \"acc_stderr\": 0.011119649443380293\n },\n \"harness|drop|3\": {\n \"em\": 0.016149328859060404,\n \"em_stderr\": 0.001290866955681033,\n \"f1\": 0.10652579697986504,\n \"f1_stderr\": 0.0024253657455140664\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.18726307808946172,\n \"acc_stderr\": 0.01074591419951081\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7876874506708761,\n \"acc_stderr\": 0.011493384687249779\n }\n}\n```", "repo_url": "https://huggingface.co/internlm/internlm-20b-chat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_08T17_10_14.815999", "path": ["**/details_harness|drop|3_2023-11-08T17-10-14.815999.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T17-10-14.815999.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_08T17_10_14.815999", "path": ["**/details_harness|gsm8k|5_2023-11-08T17-10-14.815999.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T17-10-14.815999.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_08T17_10_14.815999", "path": ["**/details_harness|winogrande|5_2023-11-08T17-10-14.815999.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T17-10-14.815999.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T17_10_14.815999", "path": ["results_2023-11-08T17-10-14.815999.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T17-10-14.815999.parquet"]}]}]} | 2023-11-08T17:10:39+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of internlm/internlm-20b-chat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model internlm/internlm-20b-chat on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
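For example, mirroring the snippet from the full card earlier in this collection:

```python
from datasets import load_dataset

# "harness_winogrande_5" is one of the 3 task configurations;
# split="train" always points to the results of the latest run.
data = load_dataset("open-llm-leaderboard/details_internlm__internlm-20b-chat_public",
    "harness_winogrande_5",
    split="train")
```

The aggregated metrics live in the "results" configuration, whose "latest" split mirrors the most recent timestamped run.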
## Latest results
These are the latest results from run 2023-11-08T17:10:14.815999 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of internlm/internlm-20b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model internlm/internlm-20b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T17:10:14.815999(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of internlm/internlm-20b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model internlm/internlm-20b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T17:10:14.815999(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
19,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of internlm/internlm-20b-chat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model internlm/internlm-20b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T17:10:14.815999(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
ddd813de77a0196e0875b21aeea58a4a5ee66750 | # Dataset Card for "bw_spec_cls_80_15"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_15 | [
"region:us"
]
| 2023-11-08T17:10:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "40842", "1": "40843", "2": "40851", "3": "40903", "4": "40908", "5": "40909", "6": "40938", "7": "40940", "8": "40985", "9": "40986", "10": "41018", "11": "41054", "12": "41095", "13": "41147", "14": "41191", "15": "41192", "16": "41568", "17": "41570", "18": "41573", "19": "41605", "20": "41709", "21": "41714", "22": "41812", "23": "41961", "24": "41962", "25": "41965", "26": "41983", "27": "42014", "28": "42016", "29": "42017", "30": "42019", "31": "42020", "32": "42023", "33": "42025", "34": "42029", "35": "42030", "36": "42031", "37": "42044", "38": "42045", "39": "42046", "40": "42048", "41": "42119", "42": "42126", "43": "42234", "44": "42235", "45": "42236", "46": "42238", "47": "42240", "48": "42241", "49": "42243", "50": "42245", "51": "42247", "52": "42310", "53": "42442", "54": "42475", "55": "42648", "56": "42659", "57": "42751", "58": "42761", "59": "42789", "60": "42966", "61": "42984", "62": "43016", "63": "43018", "64": "43019", "65": "43020", "66": "43021", "67": "43022", "68": "43023", "69": "43024", "70": "43025", "71": "43026", "72": "43027", "73": "43028", "74": "43029", "75": "43030", "76": "43063", "77": "43172", "78": "43173", "79": "43533"}}}}], "splits": [{"name": "train", "num_bytes": 87445438.4, "num_examples": 1600}], "download_size": 87152132, "dataset_size": 87445438.4}} | 2023-11-08T17:10:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_15"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_15\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_15\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_15\"\n\nMore Information needed"
]
|
5258a7deca594b33d212e021edc445f307a1c8e3 |
# Dataset Card for Evaluation run of Weyaxi/TekniumAiroboros-Nebula-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/TekniumAiroboros-Nebula-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/TekniumAiroboros-Nebula-7B](https://huggingface.co/Weyaxi/TekniumAiroboros-Nebula-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B_public",
"harness_winogrande_5",
split="train")
```
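To see which of the 64 task configurations (plus the aggregated "results" configuration) are available before loading one, you can enumerate them. This is a minimal sketch, assuming the public details repository exposes its configurations through the standard `datasets` API:

```python
from datasets import get_dataset_config_names

# List every configuration exposed by this details repository
# (one per evaluated task, plus the aggregated "results" config).
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B_public"
)
print(len(configs), configs[:5])
```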
## Latest results
These are the [latest results from run 2023-11-08T17:19:18.874101](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B_public/blob/main/results_2023-11-08T17-19-18.874101.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5486392579762343,
"acc_stderr": 0.03374349666110138,
"acc_norm": 0.5573176954338681,
"acc_norm_stderr": 0.03452133425560909,
"mc1": 0.34761321909424725,
"mc1_stderr": 0.016670769188897303,
"mc2": 0.5163563481502564,
"mc2_stderr": 0.015552871980185646,
"em": 0.34721057046979864,
"em_stderr": 0.004875539476141144,
"f1": 0.4132676174496653,
"f1_stderr": 0.004730745670924258
},
"harness|arc:challenge|25": {
"acc": 0.5264505119453925,
"acc_stderr": 0.01459093135812017,
"acc_norm": 0.5716723549488054,
"acc_norm_stderr": 0.014460496367599019
},
"harness|hellaswag|10": {
"acc": 0.6211909978092014,
"acc_stderr": 0.004840990593494692,
"acc_norm": 0.817167894841665,
"acc_norm_stderr": 0.0038573886135331004
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5259259259259259,
"acc_stderr": 0.04313531696750575,
"acc_norm": 0.5259259259259259,
"acc_norm_stderr": 0.04313531696750575
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5131578947368421,
"acc_stderr": 0.04067533136309174,
"acc_norm": 0.5131578947368421,
"acc_norm_stderr": 0.04067533136309174
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6075471698113207,
"acc_stderr": 0.030052580579557845,
"acc_norm": 0.6075471698113207,
"acc_norm_stderr": 0.030052580579557845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6527777777777778,
"acc_stderr": 0.039812405437178615,
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.039812405437178615
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952344,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952344
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5202312138728323,
"acc_stderr": 0.03809342081273958,
"acc_norm": 0.5202312138728323,
"acc_norm_stderr": 0.03809342081273958
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808777,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808777
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.032671518489247764,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.032671518489247764
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4896551724137931,
"acc_stderr": 0.04165774775728763,
"acc_norm": 0.4896551724137931,
"acc_norm_stderr": 0.04165774775728763
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.36772486772486773,
"acc_stderr": 0.024833839825562427,
"acc_norm": 0.36772486772486773,
"acc_norm_stderr": 0.024833839825562427
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377563,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377563
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6548387096774193,
"acc_stderr": 0.02704574657353433,
"acc_norm": 0.6548387096774193,
"acc_norm_stderr": 0.02704574657353433
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3793103448275862,
"acc_stderr": 0.034139638059062345,
"acc_norm": 0.3793103448275862,
"acc_norm_stderr": 0.034139638059062345
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7212121212121212,
"acc_stderr": 0.03501438706296781,
"acc_norm": 0.7212121212121212,
"acc_norm_stderr": 0.03501438706296781
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.03173071239071724,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.03173071239071724
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7823834196891192,
"acc_stderr": 0.029778663037752954,
"acc_norm": 0.7823834196891192,
"acc_norm_stderr": 0.029778663037752954
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5333333333333333,
"acc_stderr": 0.02529460802398647,
"acc_norm": 0.5333333333333333,
"acc_norm_stderr": 0.02529460802398647
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.02742001935094527,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.02742001935094527
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.032252942323996406,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.032252942323996406
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7688073394495413,
"acc_stderr": 0.018075750241633146,
"acc_norm": 0.7688073394495413,
"acc_norm_stderr": 0.018075750241633146
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.33796296296296297,
"acc_stderr": 0.03225941352631295,
"acc_norm": 0.33796296296296297,
"acc_norm_stderr": 0.03225941352631295
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7401960784313726,
"acc_stderr": 0.030778554678693257,
"acc_norm": 0.7401960784313726,
"acc_norm_stderr": 0.030778554678693257
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7088607594936709,
"acc_stderr": 0.02957160106575337,
"acc_norm": 0.7088607594936709,
"acc_norm_stderr": 0.02957160106575337
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6367713004484304,
"acc_stderr": 0.032277904428505,
"acc_norm": 0.6367713004484304,
"acc_norm_stderr": 0.032277904428505
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6259541984732825,
"acc_stderr": 0.042438692422305246,
"acc_norm": 0.6259541984732825,
"acc_norm_stderr": 0.042438692422305246
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.71900826446281,
"acc_stderr": 0.04103203830514512,
"acc_norm": 0.71900826446281,
"acc_norm_stderr": 0.04103203830514512
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.045245960070300476,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.045245960070300476
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6809815950920245,
"acc_stderr": 0.03661997551073836,
"acc_norm": 0.6809815950920245,
"acc_norm_stderr": 0.03661997551073836
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7087378640776699,
"acc_stderr": 0.044986763205729224,
"acc_norm": 0.7087378640776699,
"acc_norm_stderr": 0.044986763205729224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597524,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597524
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709390974,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709390974
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.756066411238825,
"acc_stderr": 0.015357212665829465,
"acc_norm": 0.756066411238825,
"acc_norm_stderr": 0.015357212665829465
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5867052023121387,
"acc_stderr": 0.026511261369409247,
"acc_norm": 0.5867052023121387,
"acc_norm_stderr": 0.026511261369409247
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3340782122905028,
"acc_stderr": 0.01577491142238163,
"acc_norm": 0.3340782122905028,
"acc_norm_stderr": 0.01577491142238163
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.027996723180631435,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.027996723180631435
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6237942122186495,
"acc_stderr": 0.027513925683549434,
"acc_norm": 0.6237942122186495,
"acc_norm_stderr": 0.027513925683549434
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.026725868809100797,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.026725868809100797
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.39361702127659576,
"acc_stderr": 0.029144544781596154,
"acc_norm": 0.39361702127659576,
"acc_norm_stderr": 0.029144544781596154
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.43415906127770537,
"acc_stderr": 0.01265903323706725,
"acc_norm": 0.43415906127770537,
"acc_norm_stderr": 0.01265903323706725
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.49264705882352944,
"acc_stderr": 0.030369552523902173,
"acc_norm": 0.49264705882352944,
"acc_norm_stderr": 0.030369552523902173
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.565359477124183,
"acc_stderr": 0.02005426920072646,
"acc_norm": 0.565359477124183,
"acc_norm_stderr": 0.02005426920072646
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6272727272727273,
"acc_stderr": 0.04631381319425465,
"acc_norm": 0.6272727272727273,
"acc_norm_stderr": 0.04631381319425465
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5183673469387755,
"acc_stderr": 0.03198761546763127,
"acc_norm": 0.5183673469387755,
"acc_norm_stderr": 0.03198761546763127
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7661691542288557,
"acc_stderr": 0.029929415408348384,
"acc_norm": 0.7661691542288557,
"acc_norm_stderr": 0.029929415408348384
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4578313253012048,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.4578313253012048,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7485380116959064,
"acc_stderr": 0.033275044238468436,
"acc_norm": 0.7485380116959064,
"acc_norm_stderr": 0.033275044238468436
},
"harness|truthfulqa:mc|0": {
"mc1": 0.34761321909424725,
"mc1_stderr": 0.016670769188897303,
"mc2": 0.5163563481502564,
"mc2_stderr": 0.015552871980185646
},
"harness|winogrande|5": {
"acc": 0.7324388318863457,
"acc_stderr": 0.012441718456893009
},
"harness|drop|3": {
"em": 0.34721057046979864,
"em_stderr": 0.004875539476141144,
"f1": 0.4132676174496653,
"f1_stderr": 0.004730745670924258
},
"harness|gsm8k|5": {
"acc": 0.09401061410159212,
"acc_stderr": 0.008038819818872476
}
}
```
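To work with these aggregated metrics programmatically rather than copying them from the JSON above, you can load the "results" configuration directly. A minimal sketch, assuming the "latest" split mirrors the most recent run as described above:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# the "latest" split always points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B_public",
    "results",
    split="latest",
)
```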
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B | [
"region:us"
]
| 2023-11-08T17:22:18+00:00 | {"pretty_name": "Evaluation run of Weyaxi/TekniumAiroboros-Nebula-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/TekniumAiroboros-Nebula-7B](https://huggingface.co/Weyaxi/TekniumAiroboros-Nebula-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T17:19:18.874101](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B_public/blob/main/results_2023-11-08T17-19-18.874101.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5486392579762343,\n \"acc_stderr\": 0.03374349666110138,\n \"acc_norm\": 0.5573176954338681,\n \"acc_norm_stderr\": 0.03452133425560909,\n \"mc1\": 0.34761321909424725,\n \"mc1_stderr\": 0.016670769188897303,\n \"mc2\": 0.5163563481502564,\n \"mc2_stderr\": 0.015552871980185646,\n \"em\": 0.34721057046979864,\n \"em_stderr\": 0.004875539476141144,\n \"f1\": 0.4132676174496653,\n \"f1_stderr\": 0.004730745670924258\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5264505119453925,\n \"acc_stderr\": 0.01459093135812017,\n \"acc_norm\": 0.5716723549488054,\n \"acc_norm_stderr\": 0.014460496367599019\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6211909978092014,\n \"acc_stderr\": 0.004840990593494692,\n \"acc_norm\": 0.817167894841665,\n \"acc_norm_stderr\": 0.0038573886135331004\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5259259259259259,\n \"acc_stderr\": 0.04313531696750575,\n \"acc_norm\": 0.5259259259259259,\n \"acc_norm_stderr\": 0.04313531696750575\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5131578947368421,\n \"acc_stderr\": 0.04067533136309174,\n \"acc_norm\": 0.5131578947368421,\n \"acc_norm_stderr\": 0.04067533136309174\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6075471698113207,\n \"acc_stderr\": 0.030052580579557845,\n \"acc_norm\": 0.6075471698113207,\n \"acc_norm_stderr\": 0.030052580579557845\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n \"acc_stderr\": 
0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n \"acc_norm_stderr\": 0.039812405437178615\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952344,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952344\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5202312138728323,\n \"acc_stderr\": 0.03809342081273958,\n \"acc_norm\": 0.5202312138728323,\n \"acc_norm_stderr\": 0.03809342081273958\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808777,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808777\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.032671518489247764,\n \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.032671518489247764\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4896551724137931,\n \"acc_stderr\": 0.04165774775728763,\n \"acc_norm\": 0.4896551724137931,\n \"acc_norm_stderr\": 0.04165774775728763\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.36772486772486773,\n \"acc_stderr\": 0.024833839825562427,\n \"acc_norm\": 0.36772486772486773,\n \"acc_norm_stderr\": 0.024833839825562427\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377563,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377563\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6548387096774193,\n \"acc_stderr\": 0.02704574657353433,\n \"acc_norm\": 0.6548387096774193,\n \"acc_norm_stderr\": 0.02704574657353433\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3793103448275862,\n \"acc_stderr\": 0.034139638059062345,\n \"acc_norm\": 0.3793103448275862,\n \"acc_norm_stderr\": 0.034139638059062345\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7212121212121212,\n \"acc_stderr\": 0.03501438706296781,\n \"acc_norm\": 0.7212121212121212,\n \"acc_norm_stderr\": 0.03501438706296781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.03173071239071724,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.03173071239071724\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 
0.7823834196891192,\n \"acc_stderr\": 0.029778663037752954,\n \"acc_norm\": 0.7823834196891192,\n \"acc_norm_stderr\": 0.029778663037752954\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5333333333333333,\n \"acc_stderr\": 0.02529460802398647,\n \"acc_norm\": 0.5333333333333333,\n \"acc_norm_stderr\": 0.02529460802398647\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.02742001935094527,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.02742001935094527\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.032252942323996406,\n \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.032252942323996406\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7688073394495413,\n \"acc_stderr\": 0.018075750241633146,\n \"acc_norm\": 0.7688073394495413,\n \"acc_norm_stderr\": 0.018075750241633146\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.33796296296296297,\n \"acc_stderr\": 0.03225941352631295,\n \"acc_norm\": 0.33796296296296297,\n \"acc_norm_stderr\": 0.03225941352631295\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7401960784313726,\n \"acc_stderr\": 0.030778554678693257,\n \"acc_norm\": 0.7401960784313726,\n \"acc_norm_stderr\": 0.030778554678693257\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7088607594936709,\n \"acc_stderr\": 0.02957160106575337,\n \"acc_norm\": 0.7088607594936709,\n \"acc_norm_stderr\": 0.02957160106575337\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6367713004484304,\n \"acc_stderr\": 0.032277904428505,\n \"acc_norm\": 0.6367713004484304,\n \"acc_norm_stderr\": 0.032277904428505\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6259541984732825,\n \"acc_stderr\": 0.042438692422305246,\n \"acc_norm\": 0.6259541984732825,\n \"acc_norm_stderr\": 0.042438692422305246\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.71900826446281,\n \"acc_stderr\": 0.04103203830514512,\n \"acc_norm\": 0.71900826446281,\n \"acc_norm_stderr\": 0.04103203830514512\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6759259259259259,\n \"acc_stderr\": 0.045245960070300476,\n \"acc_norm\": 0.6759259259259259,\n \"acc_norm_stderr\": 0.045245960070300476\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6809815950920245,\n \"acc_stderr\": 0.03661997551073836,\n \"acc_norm\": 0.6809815950920245,\n \"acc_norm_stderr\": 0.03661997551073836\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7087378640776699,\n \"acc_stderr\": 0.044986763205729224,\n \"acc_norm\": 0.7087378640776699,\n \"acc_norm_stderr\": 0.044986763205729224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597524,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597524\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.048523658709390974,\n \"acc_norm\": 
0.63,\n \"acc_norm_stderr\": 0.048523658709390974\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.756066411238825,\n \"acc_stderr\": 0.015357212665829465,\n \"acc_norm\": 0.756066411238825,\n \"acc_norm_stderr\": 0.015357212665829465\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5867052023121387,\n \"acc_stderr\": 0.026511261369409247,\n \"acc_norm\": 0.5867052023121387,\n \"acc_norm_stderr\": 0.026511261369409247\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3340782122905028,\n \"acc_stderr\": 0.01577491142238163,\n \"acc_norm\": 0.3340782122905028,\n \"acc_norm_stderr\": 0.01577491142238163\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.027996723180631435,\n \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.027996723180631435\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6237942122186495,\n \"acc_stderr\": 0.027513925683549434,\n \"acc_norm\": 0.6237942122186495,\n \"acc_norm_stderr\": 0.027513925683549434\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.026725868809100797,\n \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.026725868809100797\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.39361702127659576,\n \"acc_stderr\": 0.029144544781596154,\n \"acc_norm\": 0.39361702127659576,\n \"acc_norm_stderr\": 0.029144544781596154\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.43415906127770537,\n \"acc_stderr\": 0.01265903323706725,\n \"acc_norm\": 0.43415906127770537,\n \"acc_norm_stderr\": 0.01265903323706725\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.49264705882352944,\n \"acc_stderr\": 0.030369552523902173,\n \"acc_norm\": 0.49264705882352944,\n \"acc_norm_stderr\": 0.030369552523902173\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.565359477124183,\n \"acc_stderr\": 0.02005426920072646,\n \"acc_norm\": 0.565359477124183,\n \"acc_norm_stderr\": 0.02005426920072646\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n \"acc_stderr\": 0.04631381319425465,\n \"acc_norm\": 0.6272727272727273,\n \"acc_norm_stderr\": 0.04631381319425465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5183673469387755,\n \"acc_stderr\": 0.03198761546763127,\n \"acc_norm\": 0.5183673469387755,\n \"acc_norm_stderr\": 0.03198761546763127\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7661691542288557,\n \"acc_stderr\": 0.029929415408348384,\n \"acc_norm\": 0.7661691542288557,\n \"acc_norm_stderr\": 0.029929415408348384\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4578313253012048,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.4578313253012048,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7485380116959064,\n \"acc_stderr\": 0.033275044238468436,\n \"acc_norm\": 0.7485380116959064,\n \"acc_norm_stderr\": 0.033275044238468436\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.34761321909424725,\n \"mc1_stderr\": 0.016670769188897303,\n \"mc2\": 0.5163563481502564,\n \"mc2_stderr\": 0.015552871980185646\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7324388318863457,\n \"acc_stderr\": 
0.012441718456893009\n },\n \"harness|drop|3\": {\n \"em\": 0.34721057046979864,\n \"em_stderr\": 0.004875539476141144,\n \"f1\": 0.4132676174496653,\n \"f1_stderr\": 0.004730745670924258\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09401061410159212,\n \"acc_stderr\": 0.008038819818872476\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/TekniumAiroboros-Nebula-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|arc:challenge|25_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|drop|3_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|gsm8k|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hellaswag|10_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T17-19-18.874101.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T17-19-18.874101.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-08T17-19-18.874101.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-08T17-19-18.874101.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T17-19-18.874101.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["**/details_harness|winogrande|5_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T17-19-18.874101.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T17_19_18.874101", "path": ["results_2023-11-08T17-19-18.874101.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T17-19-18.874101.parquet"]}]}]} | 2023-11-08T17:23:24+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/TekniumAiroboros-Nebula-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/TekniumAiroboros-Nebula-7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
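(Minimal sketch; the repo id below follows the leaderboard's usual `details_<org>__<model>` naming and the config name is taken from this card's metadata, so verify both on the Hub.)

```python
from datasets import load_dataset

# Repo id assumed from the leaderboard's naming convention for details repos;
# "harness_winogrande_5" is one of the 64 configs listed in this card's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B",
    "harness_winogrande_5",
    split="latest",
)
```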
## Latest results
These are the latest results from run 2023-11-08T17:19:18.874101 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/TekniumAiroboros-Nebula-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/TekniumAiroboros-Nebula-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T17:19:18.874101(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/TekniumAiroboros-Nebula-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/TekniumAiroboros-Nebula-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T17:19:18.874101(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
23,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/TekniumAiroboros-Nebula-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/TekniumAiroboros-Nebula-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T17:19:18.874101(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
91cc057f53d0aad56f74ef928e70381688065c21 | # Dataset Card for "cnn_dailymail_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | determined-ai/cnn_dailymail_short | [
"region:us"
]
| 2023-11-08T17:24:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "article", "dtype": "string"}, {"name": "highlights", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5809463.987342962, "num_examples": 1322}, {"name": "validation", "num_bytes": 215935.11370436865, "num_examples": 50}, {"name": "test", "num_bytes": 204221.88024369016, "num_examples": 47}], "download_size": 815886, "dataset_size": 6229620.98129102}} | 2023-11-08T17:24:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cnn_dailymail_short"
More Information needed | [
"# Dataset Card for \"cnn_dailymail_short\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cnn_dailymail_short\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cnn_dailymail_short\"\n\nMore Information needed"
]
|
54de00901694b33a836069d6bcb90817416b5592 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | MruganKulkarni/restaurant_conversation | [
"license:mit",
"region:us"
]
| 2023-11-08T17:28:44+00:00 | {"license": "mit"} | 2023-11-08T19:34:35+00:00 | []
| []
| TAGS
#license-mit #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
11,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
53f74c7dfa009767d26fc6162ded135388353129 | # Dataset Card for "llm-detect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arincon/llm-detect | [
"region:us"
]
| 2023-11-08T17:32:57+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "01", "path": "data/01-*"}, {"split": "02", "path": "data/02-*"}, {"split": "03", "path": "data/03-*"}, {"split": "04", "path": "data/04-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "fold", "dtype": "int64"}, {"name": "essay_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "01", "num_bytes": 74744152, "num_examples": 33259}, {"name": "02", "num_bytes": 94518857, "num_examples": 39785}, {"name": "03", "num_bytes": 99824812, "num_examples": 42206}, {"name": "04", "num_bytes": 104524272, "num_examples": 44206}], "download_size": 197583354, "dataset_size": 373612093}} | 2023-11-08T17:33:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "llm-detect"
More Information needed | [
"# Dataset Card for \"llm-detect\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"llm-detect\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"llm-detect\"\n\nMore Information needed"
]
|
d7820d5712d2d7df7d03b06b439c826070261895 | # Dataset Card for "bw_spec_cls_80_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_16 | [
"region:us"
]
| 2023-11-08T17:35:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "43534", "1": "43535", "2": "43536", "3": "43585", "4": "43586", "5": "43587", "6": "43588", "7": "43589", "8": "43590", "9": "43592", "10": "43593", "11": "43594", "12": "43595", "13": "43596", "14": "43598", "15": "43599", "16": "43600", "17": "43608", "18": "43621", "19": "43623", "20": "43695", "21": "43696", "22": "43697", "23": "43698", "24": "43699", "25": "43761", "26": "43773", "27": "43796", "28": "43842", "29": "43843", "30": "43844", "31": "43856", "32": "43857", "33": "43858", "34": "43860", "35": "43861", "36": "43863", "37": "43865", "38": "43866", "39": "43867", "40": "43868", "41": "43869", "42": "43883", "43": "43886", "44": "43899", "45": "43911", "46": "43962", "47": "43965", "48": "44092", "49": "44110", "50": "44169", "51": "44236", "52": "44342", "53": "44347", "54": "44354", "55": "44778", "56": "44779", "57": "44780", "58": "44781", "59": "44782", "60": "44791", "61": "44792", "62": "44793", "63": "44794", "64": "44795", "65": "44796", "66": "44797", "67": "44798", "68": "44799", "69": "44801", "70": "44803", "71": "44804", "72": "44805", "73": "44806", "74": "44809", "75": "44820", "76": "44821", "77": "44822", "78": "44823", "79": "44848"}}}}], "splits": [{"name": "train", "num_bytes": 90417910.4, "num_examples": 1600}], "download_size": 89917143, "dataset_size": 90417910.4}} | 2023-11-08T17:35:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_16"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_16\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_16\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_16\"\n\nMore Information needed"
]
|
68b61ba46d8f6af5cfbec992bf619c12b86c82c3 | # Dataset Card for "wikilingua_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.33197745689950786, 'rouge2': 0.12351364096382854, 'rougeL': 0.24922924859888235, 'rougeLsum': 0.24922924859888235}
bert= {'precision': 0.761901833262096, 'recall': 0.7622503304255206, 'f1': 0.7612595360336023} | arthurmluz/wikilingua_data-wiki_gptextsum_results | [
"region:us"
]
| 2023-11-08T17:49:20+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 22119069, "num_examples": 8165}], "download_size": 13030706, "dataset_size": 22119069}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T17:51:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "wikilingua_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.33197745689950786, 'rouge2': 0.12351364096382854, 'rougeL': 0.24922924859888235, 'rougeLsum': 0.24922924859888235}
bert= {'precision': 0.761901833262096, 'recall': 0.7622503304255206, 'f1': 0.7612595360336023} | [
"# Dataset Card for \"wikilingua_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.33197745689950786, 'rouge2': 0.12351364096382854, 'rougeL': 0.24922924859888235, 'rougeLsum': 0.24922924859888235}\n\nbert= {'precision': 0.761901833262096, 'recall': 0.7622503304255206, 'f1': 0.7612595360336023}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikilingua_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.33197745689950786, 'rouge2': 0.12351364096382854, 'rougeL': 0.24922924859888235, 'rougeLsum': 0.24922924859888235}\n\nbert= {'precision': 0.761901833262096, 'recall': 0.7622503304255206, 'f1': 0.7612595360336023}"
]
| [
6,
131
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikilingua_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.33197745689950786, 'rouge2': 0.12351364096382854, 'rougeL': 0.24922924859888235, 'rougeLsum': 0.24922924859888235}\n\nbert= {'precision': 0.761901833262096, 'recall': 0.7622503304255206, 'f1': 0.7612595360336023}"
]
|
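The `rouge=`/`bert=` dictionaries in the `*_gptextsum_results` cards above and below have the same shape as the output of the Hugging Face `evaluate` metrics. A minimal sketch of how such scores can be computed (a reconstruction for illustration, not the authors' published script; `lang="pt"` is an assumption based on the Portuguese source corpora):

```python
import evaluate  # pip install evaluate rouge_score bert_score

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

preds = ["o gato sentou no tapete"]          # generated summaries
refs = ["o gato estava sentado no tapete"]   # reference summaries

# Returns rouge1 / rouge2 / rougeL / rougeLsum, as stored in these cards.
print(rouge.compute(predictions=preds, references=refs))
# Returns precision / recall / f1 lists plus a hashcode, matching the bert struct.
print(bertscore.compute(predictions=preds, references=refs, lang="pt"))
```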
fffebe483316c8d9382bb0ac3c7ef4fae40920ef | # Dataset Card for "GPTextSum_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.47135937656590865, 'rouge2': 0.25940674061381325, 'rougeL': 0.38504581702889945, 'rougeLsum': 0.38504581702889945}
bert= {'precision': 0.788231149315834, 'recall': 0.7805202662944793, 'f1': 0.7840157747268677} | arthurmluz/GPTextSum_data-wiki_gptextsum_results | [
"region:us"
]
| 2023-11-08T17:53:27+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 28733, "num_examples": 20}], "download_size": 35161, "dataset_size": 28733}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T17:53:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.47135937656590865, 'rouge2': 0.25940674061381325, 'rougeL': 0.38504581702889945, 'rougeLsum': 0.38504581702889945}
bert= {'precision': 0.788231149315834, 'recall': 0.7805202662944793, 'f1': 0.7840157747268677} | [
"# Dataset Card for \"GPTextSum_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.47135937656590865, 'rouge2': 0.25940674061381325, 'rougeL': 0.38504581702889945, 'rougeLsum': 0.38504581702889945}\n\nbert= {'precision': 0.788231149315834, 'recall': 0.7805202662944793, 'f1': 0.7840157747268677}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.47135937656590865, 'rouge2': 0.25940674061381325, 'rougeL': 0.38504581702889945, 'rougeLsum': 0.38504581702889945}\n\nbert= {'precision': 0.788231149315834, 'recall': 0.7805202662944793, 'f1': 0.7840157747268677}"
]
| [
6,
133
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.47135937656590865, 'rouge2': 0.25940674061381325, 'rougeL': 0.38504581702889945, 'rougeLsum': 0.38504581702889945}\n\nbert= {'precision': 0.788231149315834, 'recall': 0.7805202662944793, 'f1': 0.7840157747268677}"
]
|
93820071a7121f67222df65feb48397487721929 | # Dataset Card for "temario_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.21036975294101332, 'rouge2': 0.07970392536191843, 'rougeL': 0.1477604081207584, 'rougeLsum': 0.1477604081207584}
bert= {'precision': 0.7488837575912476, 'recall': 0.6433243179321289, 'f1': 0.6917135095596314} | arthurmluz/temario_data-wiki_gptextsum_results | [
"region:us"
]
| 2023-11-08T17:55:00+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 208005, "num_examples": 25}], "download_size": 164069, "dataset_size": 208005}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T17:55:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "temario_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.21036975294101332, 'rouge2': 0.07970392536191843, 'rougeL': 0.1477604081207584, 'rougeLsum': 0.1477604081207584}
bert= {'precision': 0.7488837575912476, 'recall': 0.6433243179321289, 'f1': 0.6917135095596314} | [
"# Dataset Card for \"temario_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.21036975294101332, 'rouge2': 0.07970392536191843, 'rougeL': 0.1477604081207584, 'rougeLsum': 0.1477604081207584}\n\nbert= {'precision': 0.7488837575912476, 'recall': 0.6433243179321289, 'f1': 0.6917135095596314}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"temario_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.21036975294101332, 'rouge2': 0.07970392536191843, 'rougeL': 0.1477604081207584, 'rougeLsum': 0.1477604081207584}\n\nbert= {'precision': 0.7488837575912476, 'recall': 0.6433243179321289, 'f1': 0.6917135095596314}"
]
| [
6,
128
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"temario_data-wiki_gptextsum_results\"\n\nrouge= {'rouge1': 0.21036975294101332, 'rouge2': 0.07970392536191843, 'rougeL': 0.1477604081207584, 'rougeLsum': 0.1477604081207584}\n\nbert= {'precision': 0.7488837575912476, 'recall': 0.6433243179321289, 'f1': 0.6917135095596314}"
]
|
e85f805f9fd8814687d56b47e67fb3f8d9ba64a2 | # Dataset Card for "es-0712-no-demoji-s"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gg-ai/es-0712-no-demoji-s | [
"region:us"
]
| 2023-11-08T17:56:04+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "clean_text", "dtype": "string"}, {"name": "sent", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 217036, "num_examples": 641}, {"name": "test", "num_bytes": 46081, "num_examples": 136}, {"name": "val", "num_bytes": 9192, "num_examples": 25}], "download_size": 189749, "dataset_size": 272309}} | 2023-11-08T17:56:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "es-0712-no-demoji-s"
More Information needed | [
"# Dataset Card for \"es-0712-no-demoji-s\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"es-0712-no-demoji-s\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"es-0712-no-demoji-s\"\n\nMore Information needed"
]
|
0ffb3e5ead5646ea6618bfe60121bccb1cc8e4f2 | # Dataset Card for "DNN-vuln"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aneeshas/DNN-vuln | [
"region:us"
]
| 2023-11-08T17:59:02+00:00 | {"dataset_info": {"features": [{"name": "past_values", "sequence": "float32"}, {"name": "past_time_features", "sequence": "float32"}, {"name": "past_observed_mask", "sequence": "float32"}, {"name": "future_values", "sequence": "float32"}, {"name": "future_time_features", "sequence": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 582168, "num_examples": 1016}, {"name": "eval", "num_bytes": 834861, "num_examples": 1457}], "download_size": 722384, "dataset_size": 1417029}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}]}]} | 2023-11-09T19:04:14+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "DNN-vuln"
More Information needed | [
"# Dataset Card for \"DNN-vuln\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"DNN-vuln\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"DNN-vuln\"\n\nMore Information needed"
]
|
4cc13b87acd98bb136aa86ad9bbea76bd72a7741 | Dataset using the bert-cased tokenizer, cutoff at 512 tokens.
Merge of the books and wikipedia datasets:
https://huggingface.co/datasets/gmongaras/book_BERT_512
https://huggingface.co/datasets/gmongaras/wikipedia_BERT_512 | gmongaras/wikipedia_book_BERT_512 | [
"region:us"
]
| 2023-11-08T17:59:13+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 248147577432, "num_examples": 80462898}], "download_size": 7045365171, "dataset_size": 248147577432}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T20:21:28+00:00 | []
| []
| TAGS
#region-us
| Dataset using the bert-cased tokenizer, cutoff at 512 tokens.
Merge of the books and wikipedia datasets:
URL
URL | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
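For the `wikipedia_book_BERT_512` card above: a minimal sketch of the merge it describes (the card states the two source datasets and the `input_ids`/`token_type_ids`/`attention_mask` columns; everything else here is an assumption):

```python
from datasets import concatenate_datasets, load_dataset

# The two source datasets named on the card, already tokenized with the
# bert-cased tokenizer at a 512-token cutoff.
books = load_dataset("gmongaras/book_BERT_512", split="train")
wiki = load_dataset("gmongaras/wikipedia_BERT_512", split="train")

merged = concatenate_datasets([books, wiki])
print(merged)  # features: input_ids, token_type_ids, attention_mask
```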
10fc871372076f31cf8548b3174fbcb569d9fd4a | # Dataset Card for "bw_spec_cls_80_17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_17 | [
"region:us"
]
| 2023-11-08T18:00:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "44849", "1": "44850", "2": "44851", "3": "44853", "4": "44854", "5": "44917", "6": "44918", "7": "44946", "8": "44947", "9": "44948", "10": "44949", "11": "44950", "12": "44951", "13": "44952", "14": "45099", "15": "45100", "16": "45101", "17": "45102", "18": "45103", "19": "45150", "20": "45151", "21": "45153", "22": "45154", "23": "45335", "24": "45387", "25": "45388", "26": "45389", "27": "45390", "28": "45391", "29": "45392", "30": "45393", "31": "45474", "32": "45475", "33": "45508", "34": "45513", "35": "45514", "36": "45515", "37": "45516", "38": "45517", "39": "45518", "40": "45519", "41": "45520", "42": "45521", "43": "45522", "44": "45523", "45": "45941", "46": "46024", "47": "46043", "48": "46058", "49": "46068", "50": "46078", "51": "46079", "52": "46157", "53": "46158", "54": "46159", "55": "46160", "56": "46161", "57": "46162", "58": "46238", "59": "46241", "60": "46525", "61": "46611", "62": "46711", "63": "46726", "64": "46732", "65": "46733", "66": "46736", "67": "46839", "68": "46840", "69": "46841", "70": "46842", "71": "46844", "72": "46846", "73": "46854", "74": "46855", "75": "46928", "76": "46930", "77": "47068", "78": "47069", "79": "47070"}}}}], "splits": [{"name": "train", "num_bytes": 87575273.6, "num_examples": 1600}], "download_size": 87221336, "dataset_size": 87575273.6}} | 2023-11-08T18:00:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_17"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_17\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_17\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_17\"\n\nMore Information needed"
]
|
44f1382cd0e056d1ede9cb4e58ce914988ee16b7 | # Dataset Card for "arxiv_abstracts_2021_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | determined-ai/arxiv_abstracts_2021_short | [
"region:us"
]
| 2023-11-08T18:05:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 98194924, "num_examples": 261634}], "download_size": 60007305, "dataset_size": 98194924}} | 2023-11-08T18:05:14+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "arxiv_abstracts_2021_short"
More Information needed | [
"# Dataset Card for \"arxiv_abstracts_2021_short\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"arxiv_abstracts_2021_short\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"arxiv_abstracts_2021_short\"\n\nMore Information needed"
]
|
197e806acece29ff481cddb429bc228b665fab48 | # Dataset Card for "temario_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.17896239871319466, 'rouge2': 0.06855215321331638, 'rougeL': 0.12007675671332124, 'rougeLsum': 0.12007675671332124}
bert= {'precision': 0.738992075920105, 'recall': 0.6221855568885803, 'f1': 0.6748717761039734} | arthurmluz/temario_data-xlsum_gptextsum_results | [
"region:us"
]
| 2023-11-08T18:13:22+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 206481, "num_examples": 25}], "download_size": 163467, "dataset_size": 206481}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T18:19:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "temario_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.17896239871319466, 'rouge2': 0.06855215321331638, 'rougeL': 0.12007675671332124, 'rougeLsum': 0.12007675671332124}
bert= {'precision': 0.738992075920105, 'recall': 0.6221855568885803, 'f1': 0.6748717761039734} | [
"# Dataset Card for \"temario_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.17896239871319466, 'rouge2': 0.06855215321331638, 'rougeL': 0.12007675671332124, 'rougeLsum': 0.12007675671332124}\n\nbert= {'precision': 0.738992075920105, 'recall': 0.6221855568885803, 'f1': 0.6748717761039734}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"temario_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.17896239871319466, 'rouge2': 0.06855215321331638, 'rougeL': 0.12007675671332124, 'rougeLsum': 0.12007675671332124}\n\nbert= {'precision': 0.738992075920105, 'recall': 0.6221855568885803, 'f1': 0.6748717761039734}"
]
| [
6,
131
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"temario_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.17896239871319466, 'rouge2': 0.06855215321331638, 'rougeL': 0.12007675671332124, 'rougeLsum': 0.12007675671332124}\n\nbert= {'precision': 0.738992075920105, 'recall': 0.6221855568885803, 'f1': 0.6748717761039734}"
]
|
6bb351c24d9752c82856e0e8ca9faf92f2bd74e3 | # Dataset Card for "GPTextSum_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.4703587157805254, 'rouge2': 0.2470694420438901, 'rougeL': 0.3578380795546488, 'rougeLsum': 0.3578380795546488}
bert= {'precision': 0.7796857297420502, 'recall': 0.7719294220209122, 'f1': 0.775640431046486} | arthurmluz/GPTextSum_data-xlsum_gptextsum_results | [
"region:us"
]
| 2023-11-08T18:20:48+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 28194, "num_examples": 20}], "download_size": 35257, "dataset_size": 28194}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T18:23:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.4703587157805254, 'rouge2': 0.2470694420438901, 'rougeL': 0.3578380795546488, 'rougeLsum': 0.3578380795546488}
bert= {'precision': 0.7796857297420502, 'recall': 0.7719294220209122, 'f1': 0.775640431046486} | [
"# Dataset Card for \"GPTextSum_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.4703587157805254, 'rouge2': 0.2470694420438901, 'rougeL': 0.3578380795546488, 'rougeLsum': 0.3578380795546488}\n\nbert= {'precision': 0.7796857297420502, 'recall': 0.7719294220209122, 'f1': 0.775640431046486}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.4703587157805254, 'rouge2': 0.2470694420438901, 'rougeL': 0.3578380795546488, 'rougeLsum': 0.3578380795546488}\n\nbert= {'precision': 0.7796857297420502, 'recall': 0.7719294220209122, 'f1': 0.775640431046486}"
]
| [
6,
132
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.4703587157805254, 'rouge2': 0.2470694420438901, 'rougeL': 0.3578380795546488, 'rougeLsum': 0.3578380795546488}\n\nbert= {'precision': 0.7796857297420502, 'recall': 0.7719294220209122, 'f1': 0.775640431046486}"
]
|
bb3492ef511923114897cb28b965709356466518 | # Dataset Card for "bw_spec_cls_80_18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_18 | [
"region:us"
]
| 2023-11-08T18:25:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "47071", "1": "47072", "2": "47073", "3": "47074", "4": "47075", "5": "47076", "6": "47077", "7": "47192", "8": "47193", "9": "47194", "10": "47195", "11": "47196", "12": "47197", "13": "47198", "14": "47199", "15": "47200", "16": "47201", "17": "47202", "18": "47260", "19": "47506", "20": "47510", "21": "47657", "22": "47658", "23": "47659", "24": "47660", "25": "47661", "26": "47662", "27": "47663", "28": "47665", "29": "47666", "30": "47670", "31": "47671", "32": "47707", "33": "47826", "34": "47835", "35": "47865", "36": "47868", "37": "47894", "38": "47895", "39": "47896", "40": "47897", "41": "47916", "42": "47921", "43": "48015", "44": "48042", "45": "48043", "46": "48044", "47": "48046", "48": "48269", "49": "48307", "50": "48367", "51": "48368", "52": "48369", "53": "48437", "54": "48439", "55": "48440", "56": "48442", "57": "48443", "58": "48444", "59": "48446", "60": "48450", "61": "48452", "62": "48453", "63": "48454", "64": "48456", "65": "48457", "66": "48462", "67": "48463", "68": "48464", "69": "48465", "70": "48466", "71": "48488", "72": "48489", "73": "48491", "74": "48492", "75": "48493", "76": "48494", "77": "48763", "78": "48808", "79": "48815"}}}}], "splits": [{"name": "train", "num_bytes": 87516568.0, "num_examples": 1600}], "download_size": 87788761, "dataset_size": 87516568.0}} | 2023-11-08T18:25:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_18"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_18\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_18\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_18\"\n\nMore Information needed"
]
|
02f80b3afabdaccb63b62fea3a19ab0cd3bee313 | # Dataset Card for "cstnews_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.37778216487327, 'rouge2': 0.23117332707382252, 'rougeL': 0.2975993612013336, 'rougeLsum': 0.2975993612013336}
bert= {'precision': 0.8107486665248871, 'recall': 0.7297985441982746, 'f1': 0.7672242373228073} | arthurmluz/cstnews_data-xlsum_gptextsum_results | [
"region:us"
]
| 2023-11-08T18:26:02+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 51435, "num_examples": 16}], "download_size": 48551, "dataset_size": 51435}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T18:34:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cstnews_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.37778216487327, 'rouge2': 0.23117332707382252, 'rougeL': 0.2975993612013336, 'rougeLsum': 0.2975993612013336}
bert= {'precision': 0.8107486665248871, 'recall': 0.7297985441982746, 'f1': 0.7672242373228073} | [
"# Dataset Card for \"cstnews_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.37778216487327, 'rouge2': 0.23117332707382252, 'rougeL': 0.2975993612013336, 'rougeLsum': 0.2975993612013336}\n\nbert= {'precision': 0.8107486665248871, 'recall': 0.7297985441982746, 'f1': 0.7672242373228073}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cstnews_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.37778216487327, 'rouge2': 0.23117332707382252, 'rougeL': 0.2975993612013336, 'rougeLsum': 0.2975993612013336}\n\nbert= {'precision': 0.8107486665248871, 'recall': 0.7297985441982746, 'f1': 0.7672242373228073}"
]
| [
6,
133
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cstnews_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.37778216487327, 'rouge2': 0.23117332707382252, 'rougeL': 0.2975993612013336, 'rougeLsum': 0.2975993612013336}\n\nbert= {'precision': 0.8107486665248871, 'recall': 0.7297985441982746, 'f1': 0.7672242373228073}"
]
|
06368a97bee6badbfc88536ffa3cc4c4b9718027 | # Dataset Card for "DeciLM-qlora-blog-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | harpreetsahota/DeciLM-qlora-blog-dataset | [
"region:us"
]
| 2023-11-08T18:31:29+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "generated_response", "sequence": "string"}, {"name": "generation_time", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 315161, "num_examples": 100}], "download_size": 184210, "dataset_size": 315161}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T18:31:30+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "DeciLM-qlora-blog-dataset"
More Information needed | [
"# Dataset Card for \"DeciLM-qlora-blog-dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"DeciLM-qlora-blog-dataset\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"DeciLM-qlora-blog-dataset\"\n\nMore Information needed"
]
|
7a90a5401fc7ce1f2ac0034abadb68096bd91f8c | # Dataset Card for "bw_spec_cls_80_19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_19 | [
"region:us"
]
| 2023-11-08T18:49:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "48861", "1": "48862", "2": "48863", "3": "48864", "4": "48865", "5": "48931", "6": "49029", "7": "49030", "8": "49039", "9": "49061", "10": "49062", "11": "49064", "12": "49066", "13": "49067", "14": "49068", "15": "49070", "16": "49071", "17": "49072", "18": "49073", "19": "49394", "20": "49401", "21": "49407", "22": "49408", "23": "49473", "24": "49476", "25": "49478", "26": "49812", "27": "49817", "28": "49842", "29": "49843", "30": "49844", "31": "49845", "32": "49846", "33": "49847", "34": "49848", "35": "49849", "36": "49856", "37": "49857", "38": "50264", "39": "50272", "40": "50276", "41": "50283", "42": "50323", "43": "50539", "44": "50543", "45": "50836", "46": "50952", "47": "50955", "48": "50956", "49": "51004", "50": "51005", "51": "51006", "52": "51111", "53": "51112", "54": "51113", "55": "51114", "56": "51115", "57": "51117", "58": "51118", "59": "51120", "60": "51203", "61": "51262", "62": "51263", "63": "51265", "64": "51267", "65": "51268", "66": "51269", "67": "51271", "68": "51273", "69": "51274", "70": "51275", "71": "51276", "72": "51333", "73": "51479", "74": "51776", "75": "51784", "76": "51785", "77": "51923", "78": "51954", "79": "51991"}}}}], "splits": [{"name": "train", "num_bytes": 85289288.0, "num_examples": 1600}], "download_size": 85456295, "dataset_size": 85289288.0}} | 2023-11-08T18:49:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_19"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_19\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_19\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_19\"\n\nMore Information needed"
]
|
2d3ec6207e67cd8ccf712308cba696b078e144c0 | # Dataset Card for "consumer_complaints_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | determined-ai/consumer_complaints_short | [
"region:us"
]
| 2023-11-08T18:52:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Issue", "dtype": "string"}, {"name": "Consumer Complaint", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3965527, "num_examples": 22610}, {"name": "test", "num_bytes": 1342624, "num_examples": 7631}], "download_size": 2481164, "dataset_size": 5308151}} | 2023-11-08T18:52:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "consumer_complaints_short"
More Information needed | [
"# Dataset Card for \"consumer_complaints_short\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"consumer_complaints_short\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"consumer_complaints_short\"\n\nMore Information needed"
]
|
be495584e53fe165be9e6723895fb7a1f8f0b83b |
# KOR-OpenOrca-Platypus-v3
- A dataset that fixes, by hand, more than 200 translation errors found in the KOR-OpenOrca-Platypus dataset.
- If you use this dataset to build a model or another dataset, a simple source attribution would be a great help to this research 😭😭
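Loading the dataset works with the standard `datasets` API; the repo id below is inferred from the card title, so verify it on the Hub:

```python
from datasets import load_dataset

# Repo id inferred from the card title; adjust if the hub path differs.
ds = load_dataset("kyujinpy/KOR-OpenOrca-Platypus-v3", split="train")
print(ds[0])
```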
## KOpen-platypus
Repo: [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
- A high-quality Korean dataset
1. Code and comments were kept as-is; only the explanatory prose was translated into Korean
2. In addition to item 1, outputs in Python, Java, Cpp, xml, etc. were all preserved in their original data form as much as possible
3. Standalone numbers and English text were carried over unchanged from the original output
4. Incomplete DeepL Pro translations were fixed by hand (for example, outputs still containing '[...]')
5. If a DeepL Pro translation came out at 50% or less of the original's character count, the translation was corrected (see the sketch at the end of this section)
6. Texts of 1,500 characters or more were translated via the API instead
7. Proper nouns were preserved as much as possible
> Post-processing details
- Add post-processing (v2)
+) Removed short-answer tasks.
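The heuristics in steps 4-5 above are easy to reproduce. Below is a minimal sketch (a reconstruction for illustration, not the actual cleanup script) that flags DeepL outputs for manual review:

```python
def needs_review(source: str, translation: str) -> bool:
    """Flag a DeepL output as likely incomplete, per the rules above:
    it still contains the unfinished-output marker '[...]', or it is
    50% or less of the source's character count."""
    if "[...]" in translation:
        return True
    return len(translation) <= 0.5 * len(source)


print(needs_review("A fairly long English instruction sentence.", "짧은 번역"))  # True
print(needs_review("Short text.", "충분히 긴 한국어 번역 결과입니다."))  # False
```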
## OpenOrca-Ko-v2
1. NIV // about 1,500 samples
2. FLAN // about 9,000 samples
3. T0 // about 6,000 samples
4. CoT // about 2,000 samples
> Dataset composition
- Manually fixed content (v2)
1. Fixed answers that were left in English (e.g., Nick -> 닉, Lucky -> 운이 좋음, ...)
2. Removed the KoCoT dataset.
3. Fixed some answers such as Yes, True, False, etc.
> Post-processing details
## Translation
Using DeepL Pro API. Thanks.
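For reference, a minimal sketch of a translation call with the official `deepl` Python client (the auth key and options are placeholders; only the use of DeepL Pro is stated above):

```python
import deepl  # pip install deepl

translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # placeholder key

result = translator.translate_text(
    "Explain the difference between a list and a tuple in Python.",
    source_lang="EN",
    target_lang="KO",  # Korean, the target language of this dataset
)
print(result.text)
```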
---
> Below is the original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
<p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p>

<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are as follows; a hypothetical example record is sketched after the list:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
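A hypothetical record matching the field list above might look like this (all values invented for illustration):

```python
record = {
    "id": "flan.123456",  # prefix names the FLAN submix: 'niv', 't0', 'cot', or 'flan'
    "system_prompt": "You are an AI assistant. Provide a detailed answer.",
    "question": "What is the capital of France?",
    "response": "The capital of France is Paris.",
}
```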
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
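A minimal streaming load might look like the sketch below; the repo id shown is the upstream Open-Orca/OpenOrca dataset, so adjust the id and field names to whichever dataset you actually load.

```python
from datasets import load_dataset

# streaming=True avoids downloading the full files up front
stream = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)
for example in stream:
    print(example["question"])
    break  # inspect a single record
```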
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@software{touvron2023llama1,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` | kyujinpy/KOR-OpenOrca-Platypus-v3 | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<50K",
"language:ko",
"license:cc-by-nc-4.0",
"arxiv:2306.02707",
"arxiv:2301.13688",
"region:us"
]
| 2023-11-08T18:56:08+00:00 | {"language": ["ko"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<50K"], "task_categories": ["conversational", "text-classification", "token-classification", "table-question-answering", "question-answering", "zero-shot-classification", "summarization", "feature-extraction", "text-generation", "text2text-generation"], "pretty_name": "OpenOrca", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 34214}]}} | 2023-11-18T20:22:23+00:00 | [
"2306.02707",
"2301.13688"
]
| [
"ko"
]
| TAGS
#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<50K #language-Korean #license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us
|
# KOR-OpenOrca-Platypus-v3
- A version of the KOR-OpenOrca-Platypus dataset in which more than 200 translation errors were fixed by hand.
- If you use this dataset to build a model or another dataset, a brief attribution would be a great help to our research
## KOpen-platypus
Repo: KOpen-platypus
- A high-quality Korean dataset
1. Code and comments are kept as-is; only the explanatory text is rendered in Korean
2. In addition to item 1, outputs such as Python, Java, Cpp, xml, etc. are all preserved in their original data form as far as possible
3. Standalone numbers and English terms are carried over unchanged from the original output
4. Incomplete DeepL Pro translations were corrected by hand (for example, outputs containing '[...]')
5. If a DeepL Pro translation came out at 50% or less of the original text's character count, the translation was revised
6. Texts of 1,500 characters or more were translated via the API instead
7. Proper nouns are preserved as far as possible
> Post-processing work performed
- Add post-processing (v2)
+) Removed short-answer tasks.
## OpenOrca-Ko-v2
1. NIV // about 1,500 examples
2. FLAN // about 9,000 examples
3. T0 // about 6,000 examples
4. CoT // about 2,000 examples
> Dataset composition
- Items fixed by hand (v2)
1. Fixed answers that had been left in English. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...)
2. Removed the KoCoT dataset.
3. Fixed some answers such as Yes, True, False, etc.
> Post-processing work performed
## Translation
Using DeepL Pro API. Thanks.
---
>Below is original dataset card
## Table of Contents
- Dataset Summary
- Dataset Attribution
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Dataset Use
- Use Cases
- Usage Caveats
- Getting Started
<p><h1> The OpenOrca Dataset! </h1></p>
!OpenOrca Logo
<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our second model, highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
OpenOrca-Preview1-13B
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented FLAN Collection data.
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
URL:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
URL URL
Want to visualize our full dataset? Check out our Nomic Atlas Map.
<img src="URL alt="Atlas Nomic Dataset Map" width="400" height="400" />
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.
These are referenced by the official FLAN Collection repo as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
| [
"# KOR-OpenOrca-Platypus-v3\n- KOR-OpenOrca-Platypus 데이터셋에서 수작업으로 번역 오류 200건 이상을 고친 데이터셋. \n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다",
"## KOpen-platpyus\nRepo: KOpen-platypus\n\n- 고품질 한국어 데이터셋 \n1. 코드와 주석은 그대로 유지하고, 설명 부분만 한국어로 수정\n2. 1번과 더불어서, Python, Java, Cpp, xml 등등 결과들은 전부 기존의 데이터 형태로 최대한 보존\n3. 단일 숫자와 영어는 본래의 결과 그대로 가져옴\n4. DeepL Pro 번역 결과 중 미완성 변역 결과 직접 수정(예를 들면, '[...]'가 포함되어 있음)\n5. DeepL Pro 번역 결과가 본래의 데이터에 비해 글자수가 50% 이하로 낮으면, 번역 결과 수정\n6. 번역하고자 하는 글자수가 1500자 이상일 경우, API로 변경해서 번역\n7. 고유명사는 최대한 유지함\n> Post-processing 작업 내용 \n\n- Add post-processing (v2) \n+) 단답형 Task 삭제.",
"## OpenOrca-Ko-v2 \n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성 \n \n- 수작업으로 고친 내용(v2) \n1. 영어로 된 답변 수정. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...) \n2. KoCoT 데이터셋 제거. \n3. Yes, True, False 등등 일부 답변 수정 \n> Post-processing 작업 내용",
"## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card",
"## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!",
"# Official Models",
"## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.",
"## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.",
"## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.",
"## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>",
"# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>",
"# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>",
"# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>",
"# Dataset Structure\n\n<a name=\"data-instances\"></a>",
"## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>",
"## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>",
"## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>",
"# Dataset Creation\n\n<a name=\"curation-rationale\"></a>",
"## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>",
"## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>",
"# Dataset Use\n\n<a name=\"use-cases\"></a>",
"## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>",
"## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>",
"## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face."
]
| [
"TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<50K #language-Korean #license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n",
"# KOR-OpenOrca-Platypus-v3\n- KOR-OpenOrca-Platypus 데이터셋에서 수작업으로 번역 오류 200건 이상을 고친 데이터셋. \n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다",
"## KOpen-platpyus\nRepo: KOpen-platypus\n\n- 고품질 한국어 데이터셋 \n1. 코드와 주석은 그대로 유지하고, 설명 부분만 한국어로 수정\n2. 1번과 더불어서, Python, Java, Cpp, xml 등등 결과들은 전부 기존의 데이터 형태로 최대한 보존\n3. 단일 숫자와 영어는 본래의 결과 그대로 가져옴\n4. DeepL Pro 번역 결과 중 미완성 변역 결과 직접 수정(예를 들면, '[...]'가 포함되어 있음)\n5. DeepL Pro 번역 결과가 본래의 데이터에 비해 글자수가 50% 이하로 낮으면, 번역 결과 수정\n6. 번역하고자 하는 글자수가 1500자 이상일 경우, API로 변경해서 번역\n7. 고유명사는 최대한 유지함\n> Post-processing 작업 내용 \n\n- Add post-processing (v2) \n+) 단답형 Task 삭제.",
"## OpenOrca-Ko-v2 \n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성 \n \n- 수작업으로 고친 내용(v2) \n1. 영어로 된 답변 수정. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...) \n2. KoCoT 데이터셋 제거. \n3. Yes, True, False 등등 일부 답변 수정 \n> Post-processing 작업 내용",
"## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card",
"## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!",
"# Official Models",
"## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.",
"## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.",
"## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.",
"## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>",
"# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>",
"# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>",
"# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>",
"# Dataset Structure\n\n<a name=\"data-instances\"></a>",
"## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>",
"## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>",
"## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>",
"# Dataset Creation\n\n<a name=\"curation-rationale\"></a>",
"## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>",
"## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>",
"# Dataset Use\n\n<a name=\"use-cases\"></a>",
"## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>",
"## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>",
"## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face."
]
| [
169,
71,
186,
104,
20,
199,
4,
48,
98,
67,
95,
122,
233,
86,
25,
19,
67,
153,
24,
18,
146,
235,
16,
46,
70,
66
]
| [
"passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<50K #language-Korean #license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n# KOR-OpenOrca-Platypus-v3\n- KOR-OpenOrca-Platypus 데이터셋에서 수작업으로 번역 오류 200건 이상을 고친 데이터셋. \n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다## KOpen-platpyus\nRepo: KOpen-platypus\n\n- 고품질 한국어 데이터셋 \n1. 코드와 주석은 그대로 유지하고, 설명 부분만 한국어로 수정\n2. 1번과 더불어서, Python, Java, Cpp, xml 등등 결과들은 전부 기존의 데이터 형태로 최대한 보존\n3. 단일 숫자와 영어는 본래의 결과 그대로 가져옴\n4. DeepL Pro 번역 결과 중 미완성 변역 결과 직접 수정(예를 들면, '[...]'가 포함되어 있음)\n5. DeepL Pro 번역 결과가 본래의 데이터에 비해 글자수가 50% 이하로 낮으면, 번역 결과 수정\n6. 번역하고자 하는 글자수가 1500자 이상일 경우, API로 변경해서 번역\n7. 고유명사는 최대한 유지함\n> Post-processing 작업 내용 \n\n- Add post-processing (v2) \n+) 단답형 Task 삭제.",
"passage: ## OpenOrca-Ko-v2 \n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성 \n \n- 수작업으로 고친 내용(v2) \n1. 영어로 된 답변 수정. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...) \n2. KoCoT 데이터셋 제거. \n3. Yes, True, False 등등 일부 답변 수정 \n> Post-processing 작업 내용## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!# Official Models## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.",
"passage: ## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a># Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a># Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a># Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a># Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a># Dataset Structure\n\n<a name=\"data-instances\"></a>",
"passage: ## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a># Dataset Creation\n\n<a name=\"curation-rationale\"></a>## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>"
]
|
52829c999e21207a0f8aa863cc6ffde7275db8bd | # Dataset Card for "mind_recommendation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jiuyuan/mind_recommendation | [
"region:us"
]
| 2023-11-08T19:05:23+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4687117, "num_examples": 993}], "download_size": 1214223, "dataset_size": 4687117}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T20:15:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mind_recommendation"
More Information needed | [
"# Dataset Card for \"mind_recommendation\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mind_recommendation\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mind_recommendation\"\n\nMore Information needed"
]
|
14b842bfb145e7840146cdb4a090fe632c2f2570 | # Dataset Card for "humansleepproject-rr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | emi429/humansleepproject-rr | [
"region:us"
]
| 2023-11-08T19:14:07+00:00 | {"dataset_info": {"features": [{"name": "rr_intervals", "sequence": "float64"}, {"name": "patient_id", "dtype": "string"}, {"name": "sleep_stage", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 383327073, "num_examples": 469}], "download_size": 19983035, "dataset_size": 383327073}} | 2023-11-08T19:14:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "humansleepproject-rr"
More Information needed | [
"# Dataset Card for \"humansleepproject-rr\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"humansleepproject-rr\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"humansleepproject-rr\"\n\nMore Information needed"
]
|
ff453c31ea1dceceed19066618bb7aac4d93d74d | # Dataset Card for "bw_spec_cls_80_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_20 | [
"region:us"
]
| 2023-11-08T19:14:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "51992", "1": "51998", "2": "51999", "3": "52000", "4": "52001", "5": "52034", "6": "52035", "7": "52036", "8": "52037", "9": "52039", "10": "52040", "11": "52041", "12": "52042", "13": "52044", "14": "52045", "15": "52118", "16": "52119", "17": "52120", "18": "52121", "19": "52122", "20": "52123", "21": "52124", "22": "52125", "23": "52126", "24": "52127", "25": "52128", "26": "52129", "27": "52141", "28": "52409", "29": "52446", "30": "52447", "31": "52448", "32": "52449", "33": "52451", "34": "52500", "35": "52501", "36": "52502", "37": "52508", "38": "52522", "39": "52579", "40": "52628", "41": "52629", "42": "52630", "43": "52631", "44": "52632", "45": "52633", "46": "52634", "47": "52635", "48": "52636", "49": "52637", "50": "52638", "51": "52639", "52": "52641", "53": "52642", "54": "52644", "55": "52645", "56": "52646", "57": "52647", "58": "52648", "59": "52649", "60": "52650", "61": "52859", "62": "52860", "63": "52861", "64": "52862", "65": "53152", "66": "53154", "67": "53156", "68": "53157", "69": "53158", "70": "53159", "71": "53160", "72": "53299", "73": "53300", "74": "53301", "75": "53302", "76": "53379", "77": "53381", "78": "53457", "79": "53496"}}}}], "splits": [{"name": "train", "num_bytes": 86767340.8, "num_examples": 1600}], "download_size": 86074372, "dataset_size": 86767340.8}} | 2023-11-08T19:14:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_20"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_20\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_20\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_20\"\n\nMore Information needed"
]
|
dfc50b6d3d5fdb9b88c6fae57c93cac61e307ebe | # Dataset Card for "id_card_class_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | erikaxenia/id_card_class_v2 | [
"region:us"
]
| 2023-11-08T19:24:36+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 151698325.0, "num_examples": 529}, {"name": "test", "num_bytes": 16112586.0, "num_examples": 58}], "download_size": 165957552, "dataset_size": 167810911.0}} | 2023-11-08T19:24:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "id_card_class_v2"
More Information needed | [
"# Dataset Card for \"id_card_class_v2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"id_card_class_v2\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"id_card_class_v2\"\n\nMore Information needed"
]
|
f295e3eaeafdadf1bc804570af0509715551fb19 | # Dataset Card for "bw_spec_cls_80_21"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arieg/bw_spec_cls_80_21 | [
"region:us"
]
| 2023-11-08T19:38:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "53576", "1": "53578", "2": "53591", "3": "53592", "4": "53675", "5": "53723", "6": "53724", "7": "53725", "8": "53726", "9": "53727", "10": "53728", "11": "53729", "12": "53807", "13": "53862", "14": "53863", "15": "53937", "16": "54019", "17": "54031", "18": "54032", "19": "54033", "20": "54034", "21": "54037", "22": "54039", "23": "54061", "24": "54062", "25": "54063", "26": "54064", "27": "54149", "28": "54150", "29": "54151", "30": "54152", "31": "54153", "32": "54154", "33": "54155", "34": "54156", "35": "54158", "36": "54159", "37": "54160", "38": "54163", "39": "54234", "40": "54235", "41": "54236", "42": "54237", "43": "54297", "44": "54335", "45": "54365", "46": "54376", "47": "54433", "48": "54436", "49": "54437", "50": "54438", "51": "54442", "52": "54443", "53": "54475", "54": "54476", "55": "54479", "56": "54480", "57": "54481", "58": "54482", "59": "54496", "60": "54568", "61": "54570", "62": "54576", "63": "54578", "64": "54580", "65": "54621", "66": "54623", "67": "54624", "68": "54625", "69": "54626", "70": "54662", "71": "54664", "72": "54665", "73": "54666", "74": "54667", "75": "54719", "76": "54735", "77": "54753", "78": "54874", "79": "54942"}}}}], "splits": [{"name": "train", "num_bytes": 87811337.6, "num_examples": 1600}], "download_size": 87587637, "dataset_size": 87811337.6}} | 2023-11-08T19:38:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bw_spec_cls_80_21"
More Information needed | [
"# Dataset Card for \"bw_spec_cls_80_21\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bw_spec_cls_80_21\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bw_spec_cls_80_21\"\n\nMore Information needed"
]
|
aa5a163b9ae449214f86a7cebd258d19c8387362 | # Dataset Card for "mind_recommendations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jiuyuan/mind_recommendations | [
"region:us"
]
| 2023-11-08T19:47:08+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4736767, "num_examples": 993}], "download_size": 1164035, "dataset_size": 4736767}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T19:47:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mind_recommendations"
More Information needed | [
"# Dataset Card for \"mind_recommendations\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mind_recommendations\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mind_recommendations\"\n\nMore Information needed"
]
|
3eb1c7e1fb3f899f40b5c36367f252f8be2df358 | # Dataset Card for "wikilingua_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.3230756314331615, 'rouge2': 0.12295752023585965, 'rougeL': 0.23099240967982115, 'rougeLsum': 0.23099240967982115}
bert= {'precision': 0.7382304361929877, 'recall': 0.7454116297765061, 'f1': 0.7414375136205958} | arthurmluz/xlsum_data-xlsum_gptextsum_results | [
"region:us"
]
| 2023-11-08T19:56:20+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 26244213, "num_examples": 7175}], "download_size": 15951725, "dataset_size": 26244213}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-08T21:19:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "wikilingua_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.3230756314331615, 'rouge2': 0.12295752023585965, 'rougeL': 0.23099240967982115, 'rougeLsum': 0.23099240967982115}
bert= {'precision': 0.7382304361929877, 'recall': 0.7454116297765061, 'f1': 0.7414375136205958} | [
"# Dataset Card for \"wikilingua_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.3230756314331615, 'rouge2': 0.12295752023585965, 'rougeL': 0.23099240967982115, 'rougeLsum': 0.23099240967982115}\n\nbert= {'precision': 0.7382304361929877, 'recall': 0.7454116297765061, 'f1': 0.7414375136205958}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikilingua_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.3230756314331615, 'rouge2': 0.12295752023585965, 'rougeL': 0.23099240967982115, 'rougeLsum': 0.23099240967982115}\n\nbert= {'precision': 0.7382304361929877, 'recall': 0.7454116297765061, 'f1': 0.7414375136205958}"
]
| [
6,
132
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikilingua_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.3230756314331615, 'rouge2': 0.12295752023585965, 'rougeL': 0.23099240967982115, 'rougeLsum': 0.23099240967982115}\n\nbert= {'precision': 0.7382304361929877, 'recall': 0.7454116297765061, 'f1': 0.7414375136205958}"
]
|
8440b894d6658d2cf7a56b0e57c36dc3ec4527af | # Dataset Card for "context_extension-mistral-7k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sade-adrien/context_extension-mistral-7k | [
"region:us"
]
| 2023-11-08T20:20:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "label", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1627254691, "num_examples": 8774}, {"name": "val", "num_bytes": 176513698, "num_examples": 975}], "download_size": 782669168, "dataset_size": 1803768389}} | 2023-11-08T20:21:36+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "context_extension-mistral-7k"
More Information needed | [
"# Dataset Card for \"context_extension-mistral-7k\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"context_extension-mistral-7k\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"context_extension-mistral-7k\"\n\nMore Information needed"
]
|
d7272fab8bb6d4f36369f5e4453a9dd53402f7fb | # Dataset Card for "paradetox-lowerCase"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox-lowerCase | [
"region:us"
]
| 2023-11-08T20:24:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2149920, "num_examples": 19744}], "download_size": 1230203, "dataset_size": 2149920}} | 2023-11-08T20:35:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox-lowerCase"
More Information needed | [
"# Dataset Card for \"paradetox-lowerCase\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox-lowerCase\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox-lowerCase\"\n\nMore Information needed"
]
|
d8862215b48a5127db9b2c17b1fdadfbed14885d | # Dataset Card for "ca332ef1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/ca332ef1 | [
"region:us"
]
| 2023-11-08T20:31:08+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 192, "num_examples": 10}], "download_size": 1372, "dataset_size": 192}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T20:31:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ca332ef1"
More Information needed | [
"# Dataset Card for \"ca332ef1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ca332ef1\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ca332ef1\"\n\nMore Information needed"
]
|
18e054b2f221eeff9b03e627f1314c3d411d76cd |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | Cabgar/compendio_v1 | [
"region:us"
]
| 2023-11-08T20:31:53+00:00 | {} | 2023-11-08T20:36:54+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
07d81dda104f8cdc6c239a29d2b6bc3d5043104c | # Dataset Card for "paradetox-preprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox-preprocess | [
"region:us"
]
| 2023-11-08T20:48:24+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2137058, "num_examples": 19744}], "download_size": 1217740, "dataset_size": 2137058}} | 2023-11-08T21:01:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox-preprocess"
More Information needed | [
"# Dataset Card for \"paradetox-preprocess\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox-preprocess\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox-preprocess\"\n\nMore Information needed"
]
|
276063fa87505aa21df697787e46b83f5de3312c | # Dataset Card for "vi_text_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dinhbinh161/vi_text_2 | [
"region:us"
]
| 2023-11-08T21:06:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51531390, "num_examples": 528489}], "download_size": 29226757, "dataset_size": 51531390}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T21:09:36+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "vi_text_2"
More Information needed | [
"# Dataset Card for \"vi_text_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"vi_text_2\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"vi_text_2\"\n\nMore Information needed"
]
|
9e08c52cbb4f0b105902c25f12689d4947fe34d0 | # Dataset Card for "paradetox-preprocess-editOps"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox-preprocess-editOps | [
"region:us"
]
| 2023-11-08T21:13:50+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}, {"name": "edit_ops", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 4628797, "num_examples": 19744}], "download_size": 1848112, "dataset_size": 4628797}} | 2023-11-08T21:19:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox-preprocess-editOps"
More Information needed | [
"# Dataset Card for \"paradetox-preprocess-editOps\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox-preprocess-editOps\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox-preprocess-editOps\"\n\nMore Information needed"
]
|
2f70272b6182eb0ace68dd2a27c9da6a4be5f055 | # Dataset Card for "wikilingua_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.22790970025682958, 'rouge2': 0.056972907672140194, 'rougeL': 0.16139060371829636, 'rougeLsum': 0.16139060371829636}
bert= {'precision': 0.7055682943703862, 'recall': 0.7017609257422118, 'f1': 0.7028708778125555} | arthurmluz/wikilingua_data-xlsum_gptextsum_results | [
"region:us"
]
| 2023-11-08T21:19:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}], "splits": [{"name": "validation", "num_bytes": 21944405, "num_examples": 8165}], "download_size": 12871215, "dataset_size": 21944405}} | 2023-11-08T21:19:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "wikilingua_data-xlsum_gptextsum_results"
rouge= {'rouge1': 0.22790970025682958, 'rouge2': 0.056972907672140194, 'rougeL': 0.16139060371829636, 'rougeLsum': 0.16139060371829636}
bert= {'precision': 0.7055682943703862, 'recall': 0.7017609257422118, 'f1': 0.7028708778125555} | [
"# Dataset Card for \"wikilingua_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.22790970025682958, 'rouge2': 0.056972907672140194, 'rougeL': 0.16139060371829636, 'rougeLsum': 0.16139060371829636}\n\nbert= {'precision': 0.7055682943703862, 'recall': 0.7017609257422118, 'f1': 0.7028708778125555}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikilingua_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.22790970025682958, 'rouge2': 0.056972907672140194, 'rougeL': 0.16139060371829636, 'rougeLsum': 0.16139060371829636}\n\nbert= {'precision': 0.7055682943703862, 'recall': 0.7017609257422118, 'f1': 0.7028708778125555}"
]
| [
6,
134
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikilingua_data-xlsum_gptextsum_results\"\n\nrouge= {'rouge1': 0.22790970025682958, 'rouge2': 0.056972907672140194, 'rougeL': 0.16139060371829636, 'rougeLsum': 0.16139060371829636}\n\nbert= {'precision': 0.7055682943703862, 'recall': 0.7017609257422118, 'f1': 0.7028708778125555}"
]
|
f7719a87e269cbe5b21bf12db736e926d0cbe43f | # Dataset Card for "paradetox-preprocess-maskedComments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox-preprocess-maskedComments | [
"region:us"
]
| 2023-11-08T21:31:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}, {"name": "edit_ops", "sequence": {"sequence": "string"}}, {"name": "masked_comment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6126021, "num_examples": 19744}], "download_size": 2488196, "dataset_size": 6126021}} | 2023-11-08T21:31:03+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox-preprocess-maskedComments"
More Information needed | [
"# Dataset Card for \"paradetox-preprocess-maskedComments\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox-preprocess-maskedComments\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox-preprocess-maskedComments\"\n\nMore Information needed"
]
|
0af9ee2341a8e03c00d4ef3cee1eca84b0fc0bb8 | # Dataset Card for "10k_bench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Farsight-AI/10k_bench | [
"region:us"
]
| 2023-11-08T21:35:47+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "cik", "dtype": "int64"}, {"name": "context", "dtype": "string"}, {"name": "filingDate", "dtype": "timestamp[s]"}, {"name": "docID", "dtype": "string"}, {"name": "generatedQuestion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 100350, "num_examples": 130}], "download_size": 56969, "dataset_size": 100350}} | 2023-11-08T21:37:44+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "10k_bench"
More Information needed | [
"# Dataset Card for \"10k_bench\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"10k_bench\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"10k_bench\"\n\nMore Information needed"
]
|
6477e453f770201411ed948a7aa2305373a1a0f6 | # Dataset Card for "paradetox-preprocess-maskedComments-without-INSERT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox-preprocess-maskedComments-without-INSERT | [
"region:us"
]
| 2023-11-08T21:38:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}, {"name": "edit_ops", "sequence": {"sequence": "string"}}, {"name": "masked_comment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5935752, "num_examples": 19744}], "download_size": 2434093, "dataset_size": 5935752}} | 2023-11-08T22:21:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox-preprocess-maskedComments-without-INSERT"
More Information needed | [
"# Dataset Card for \"paradetox-preprocess-maskedComments-without-INSERT\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox-preprocess-maskedComments-without-INSERT\"\n\nMore Information needed"
]
| [
6,
26
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox-preprocess-maskedComments-without-INSERT\"\n\nMore Information needed"
]
|
51a01b061f2fe8e2f878a218af855a3a53e4b0de | # Dataset Card for "PubChem10M_Canonicalized_SELFIES"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jarod0411/PubChem10M_Canonicalized_SELFIES | [
"region:us"
]
| 2023-11-08T21:40:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "smiles", "dtype": "string"}, {"name": "selfies", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2238384519.927304, "num_examples": 8999273}, {"name": "validation", "num_bytes": 248620852.6737807, "num_examples": 999913}], "download_size": 0, "dataset_size": 2487005372.6010847}} | 2023-11-09T04:31:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "PubChem10M_Canonicalized_SELFIES"
More Information needed | [
"# Dataset Card for \"PubChem10M_Canonicalized_SELFIES\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"PubChem10M_Canonicalized_SELFIES\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"PubChem10M_Canonicalized_SELFIES\"\n\nMore Information needed"
]
|
517e22bd9e512ad108c573fd22ef97e79672a587 | # MADLAD-400
## Dataset and Introduction
[MADLAD-400 (*Multilingual Audited Dataset: Low-resource And Document-level*)](https://arxiv.org/abs/2309.04662) is
a document-level multilingual dataset based on Common Crawl, covering 419
languages in total. This uses all snapshots of CommonCrawl available as of August
1, 2022. The primary advantage of this dataset over similar datasets is that it
is more multilingual (419 languages), it is audited and more highly filtered,
and it is document-level. The main disadvantage is also its strength -- being
more filtered, it may lack the recall needed for some applications.
There are two versions released: the **noisy** dataset, which has no filtering
except document-level LangID, and the **clean** dataset, which has a variety of
filters applied, though it naturally has a fair amount of noise itself. Each
dataset is released in a document-level form that has been deduplicated.
## Loading
You can load both the clean and noisy versions of any language by specifying its LangID:
~~~
from datasets import load_dataset

madlad_abt = load_dataset("allenai/madlad-400", "abt")
~~~
A list of languages can also be supplied with a keyword argument:
~~~
madlad_multilang = load_dataset("allenai/madlad-400", languages=["abt", "ace"])
~~~
Additionally, you can load the noisy and clean subsets separately with the `split` keyword argument:
~~~
madlad_multilang_clean = load_dataset("allenai/madlad-400", languages=["abt", "ace"], split="clean")
~~~
## LangID model and Crawl
Following [Language Id In the Wild](https://arxiv.org/pdf/2010.14571.pdf), we
trained a Semi-Supervised LangId model (SSLID) on 500 languages. The training
data is as described in that paper, with the differences that 1) training data
is sampled to a temperature of `T=3` to reduce over-triggering on low-resource
languages; and 2) the data is supplemented with web-crawled data from the same
paper (that has already been through the various filters described therein) in
the hopes that it will increase robustness to web-domain text.
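
The exact sampling scheme is not spelled out here, but temperature sampling with
`T=3` conventionally means sampling each language in proportion to its example
count raised to the power `1/T`, which flattens the distribution toward
low-resource languages. A minimal sketch of that standard formulation:

```python
import numpy as np

def temperature_sampling_probs(counts, T=3.0):
    """Per-language sampling probabilities p ∝ count**(1/T).

    T=1 reproduces the raw data distribution; larger T flattens it,
    up-weighting low-resource languages.
    """
    counts = np.asarray(counts, dtype=np.float64)
    weights = counts ** (1.0 / T)
    return weights / weights.sum()

# A high-resource vs. low-resource pair:
print(temperature_sampling_probs([1_000_000, 1_000]))  # ~[0.91, 0.09]
```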
## Filtering
Before separating the raw CommonCrawl corpus by LangID, the following
filtering steps are applied, similar to Raffel et al. (2020):
- Discarded any page with fewer than 5 sentences and only retained lines that
contained at least 3 words.
- Removed any line with the word Javascript.
- Removed any page where the phrase “lorem ipsum” appeared.
- Removed any pages containing the phrases "terms of use", "privacy policy",
"cookie policy", "uses cookies", "use of cookies", "use cookies"
- Removed any pages that contained a curly bracket.
- To deduplicate the data set, discarded all but one of any three-sentence span occurring more than once in the data set (a rough sketch of this step follows the list).
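
A rough sketch of the three-sentence-span deduplication rule, assuming documents
are already split into sentences (names here are illustrative; the production
pipeline is a far more efficient streaming implementation):

```python
def dedup_three_sentence_spans(docs):
    """docs: list of documents, each a list of sentences.

    Keeps the first occurrence of any three-sentence span and drops
    later repeats, per the rule described above.
    """
    seen = set()
    deduped = []
    for sentences in docs:
        kept, i = [], 0
        while i < len(sentences):
            span = tuple(sentences[i:i + 3])
            if len(span) == 3 and span in seen:
                i += 3  # this span was already emitted once; drop it
                continue
            if len(span) == 3:
                seen.add(span)
            kept.append(sentences[i])
            i += 1
        deduped.append(kept)
    return deduped
```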
The `noisy` subset of the data was filtered only by document-level LangID, which
was taken to be the majority sentence-level LangID prediction. The `clean`
subset removed all documents with a `pct_questionable` score greater than
20%. It furthermore removed any document with under 5 sentences.
The `pct_questionable` score is simply the percentage of sentences in the input
document that were "questionable" (a sketch of the scoring follows the list
below). A sentence was considered questionable if any of the following were
true:
* **LangID Consistency:** the sentence-level LangID does not match the
document-level LangID
* **List Case:** The sentence has at least 12 tokens, and over 50% of the
tokens begin with a capital letter.
* **Length:** The sentence has under 20 characters or over 500 characters
(note: this is a bad heuristic for ideographic languages)
* **Danger Chars:** Over 20% of the characters in the sentence match
`[0-9{}+/()>]`
* **Cursedness:** The sentence matches a cursed regex (see below)
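
Putting those rules together, here is a minimal sketch of the sentence check and
the document-level `clean` filter. It assumes whitespace tokenization and
externally supplied sentence-level LangID predictions; `is_cursed` is the
function defined in the next section, and all names are illustrative:

```python
import re

DANGER_CHARS = re.compile(r"[0-9{}+/()>]")

def is_questionable(sent, sent_langid, doc_langid, is_cursed):
    tokens = sent.split()  # whitespace tokenization; wrong for ideographic scripts
    return (
        sent_langid != doc_langid                                          # LangID consistency
        or (len(tokens) >= 12
            and sum(t[:1].isupper() for t in tokens) > 0.5 * len(tokens))  # list case
        or not 20 <= len(sent) <= 500                                      # length
        or len(DANGER_CHARS.findall(sent)) > 0.2 * len(sent)               # danger chars
        or is_cursed(sent)                                                 # cursed regexes
    )

def keep_in_clean(sents, sent_langids, doc_langid, is_cursed):
    flagged = sum(
        is_questionable(s, l, doc_langid, is_cursed)
        for s, l in zip(sents, sent_langids)
    )
    pct_questionable = 100.0 * flagged / max(len(sents), 1)
    return len(sents) >= 5 and pct_questionable <= 20.0
```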
### Cursed Substrings
Based on the initial round of data audits, the authors created a heuristic list of
substrings and regexes accounting for a large amount of questionable content.
Keep in mind that these all feed into the `pct_questionable` score -- a
document is only excluded from the `clean` dataset if over 20% of the sentences
in that document are flagged as questionable.
Notes about cursed substrings:
* Low-quality sentences ending in the pipe character were very common. Before
you ask, this was not Devanagari-script text using a Danda.
* The last few regexes are meant to match `A N T S P E A K`, `List Case`, and
weirdly regular text (for instance, lists of shipping labels or country
codes)
```
# this implementation is for demonstration and is pretty inefficient;
# to speed it up, use string inclusion (`in`) instead of regex for all but the
# last four, and for those use a compiled regex (see the sketch below).
import re

def is_cursed(s):
    return any(re.search(curse, s) for curse in CURSED_SUBSTRINGS)

CURSED_SUBSTRINGS = [" №", "���", "\\|\\s*$", " nr\\.$", "aute irure dolor ", " sunt in culpa qui ", "orem ipsum ", " quis nostrud ", " adipisicing ", " dolore eu ", " cupidatat ", "autem vel eum", "wisi enim ad", " sex ", " porn ", "黄色电影", "mp3", "ownload", "Vol\\.", " Ep\\.", "Episode", " г\\.\\s*$", " кг\\.\\s*$", " шт\\.", "Develop", "Facebook", " crusher ", " xxx ", " ... ... ... ... ... ... ... ... ...", " .... .... .... .... .... .... .... .... ....", " [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ]", ", ..,,? ..,,? ..,,? ..,,?"]
```
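
The speed-up the comment mentions can also be approximated by simply compiling
every pattern once (a small sketch reusing the `CURSED_SUBSTRINGS` list above;
note that the regex-escaped entries would need unescaping before a plain `in`
check could be used on them):

```python
import re

# Compile each cursed pattern a single time instead of on every call.
_CURSED_RES = [re.compile(pattern) for pattern in CURSED_SUBSTRINGS]

def is_cursed_fast(s):
    return any(r.search(s) for r in _CURSED_RES)
```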
### Virama Correction
Many languages using Brahmic Abugida (South and Southeast Asian scripts like
Devanagari, Khmer, etc.) use some variant on the virama character. For whatever
reason, it was found that this character was often messed up in the common crawl
snapshots used. Therefore, for the languages `bn my pa gu or ta te kn ml
si th tl mn lo bo km hi mr ne gom as jv dv bho dz hne ks_Deva mag mni shn yue zh
ja kjg mnw ksw rki mtr mwr xnr`, a special correction step was done.
For these languages, the authors took the list of all virama characters and removed all
unnecessary spaces between each instance of a virama character and the next
character with a regex.
```
import regex  # third-party `regex` package (pip install regex)

# _VIRAMA_CHARS is assumed to be defined as a string of all virama code points.
# Collapse "<space><virama><space>" down to just the virama character:
x = regex.sub(r' ([%s]) ' % _VIRAMA_CHARS, '\\1', x)
```
### Myanmar Font Compatibility
Prior to 2019, the most popular font for Burmese websites was the Zawgyi font.
The authors used [Myanmar Tools](https://github.com/google/myanmar-tools) to convert Zawgyi-encoded text to Unicode.
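
A hedged sketch of what such a conversion can look like with the
`myanmar-tools` Python package together with ICU's `Zawgyi-my` transform (the
exact pipeline used for MADLAD-400 is not specified here, and the 0.95
threshold is an illustrative choice):

```python
from myanmartools import ZawgyiDetector  # pip install myanmar-tools
from icu import Transliterator           # pip install PyICU

detector = ZawgyiDetector()
zawgyi_to_unicode = Transliterator.createInstance('Zawgyi-my')

def normalize_burmese(text, threshold=0.95):
    # Convert only when the detector is confident the text is Zawgyi-encoded.
    if detector.get_zawgyi_probability(text) > threshold:
        return zawgyi_to_unicode.transliterate(text)
    return text
```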
### Languages without Whitespace

Several scripts, like the Chinese script, Tibetan script, and Thai, do not use
whitespace to separate words.
dataset are `yue zh ja th lo kjg mnw my shn ksw rki km bo dz`.
Alas, the **Length** aspect of the `pct_questionable` score was calculated using
simplistic whitespace tokenization, and therefore rendered the whole
`pct_questionable` score invalid for those languages. Therefore, for these
languages, the "clean" data is identical to the "noisy" data (barring Chinese;
see below.)
### Special filters
Chinese had a particular issue with pornographic content. After manual inspection,
a list of strings likely to appear in pornographic content was developed, and all
pages containing at least one of these strings were removed. This resulted in a
17% reduction in the number of documents and a 56% reduction in file size.
```
pornsignals = "caoporn caoprom caopron caoporen caoponrn caoponav caopom caoorn 99re dy888 caopro hezyo re99 4438x zooskool xfplay 7tav xxoo xoxo 52av freexx 91chinese anquye cao97 538porm 87fuli 91pron 91porn 26uuu 4438x 182tv kk4444 777me ae86 91av 720lu yy6080 6080yy qqchub paa97 aiai777 yy4480 videossexo 91free 一级特黄大片 偷拍久久国产视频 日本毛片免费视频观看 久久免费热在线精品 高清毛片在线看 日本毛片高清免费视频 一级黄色录像影片 亚洲男人天堂 久久精品视频在线看 自拍区偷拍亚洲视频 亚洲人成视频在线播放 色姑娘综合站 丁香五月啪啪 在线视频成人社区 亚洲人成视频在线播放 久久国产自偷拍 一本道 大香蕉无码 香港经典三级 亚洲成在人线免费视频 天天色综合网 大香蕉伊人久草 欧美一级高清片 天天鲁夜夜啪视频在线 免费黄片视频在线观看 加比勒久久综合 久草热久草在线视频 韩国三级片大全在线观看 青青草在线视频 美国一级毛片 久草在线福利资源 啪啪啪视频在线观看免费 成人福利视频在线观看 婷婷我去也 老司机在线国产 久久成人视频 手机看片福利永久国产 高清国产偷拍在线 大香蕉在线影院 日本高清免费一本视频 男人的天堂东京热 影音先锋男人资源 五月婷婷开心中文字幕 亚洲香蕉视频在线播放 天天啪久久爱视频精品 超碰久久人人摸人人搞".split()
```
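
The document-level filter is then just a substring check against this list (a
sketch; `keep_zh_document` is an illustrative name):

```python
def keep_zh_document(doc: str) -> bool:
    # Drop any page containing at least one known porn-signal string.
    return not any(signal in doc for signal in pornsignals)
```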
A few more random notes, comparing to common alternative codes for these
languages:
* `fil` for Filipino/Tagalog, not `tl`
* `ak` for Twi/Akan, rather than `tw`. This includes Fante.
* The macro code `chm` is unfortunately used for Meadow Mari (instead of the
correct `mhr`), and `mrj` for Hill Mari
* `no` for Norwegian Bokmål, whereas some resources use
`nb`
* `ps` for Pashto instead of `pbt` (Southern Pashto)
* `ms` for Standard Malay, not `zlm`
* `sq` for Albanian, without distinguishing dialects like
Gheg (`aln`) and Tosk (`als`)
* `ber` as the code for Tamazight, after consultation with Tamazight
speakers, who opined that the dialect distinctions are not significant. Other
resources use the individual codes like `tzm` and `kab`.
* Macrocode `qu` for Quechua. In practice, this seems usually to be
a mix of the Ayacucho and Cusco dialects. Other resources, like NLLB, may
use the dialect code, e.g. `quy` for Ayacucho Chanka. The same is true for a
few other macro codes, like `ff` (Macro code for Fulfulde, whereas other
sources may use e.g. `fuv`.)
* Really, there are notes that can be made about almost any code, from the
well-accepted conventions like `zh` for Mandarin, to many dialectal questions,
like which variant of Hmong the `hmn` data really is. But the notes above are
made specifically for codes where the authors are aware of other data sources
floating around that use different conventions.
## Audit
Following [Quality at a Glance](https://arxiv.org/abs/2103.12028), the authors performed
an "audit" of every corpus in this dataset. Although the authors did not speak most
languages, they were able to give high-level comments on the general quality. They
looked at a sample of 20 documents of each language.
After an initial round of auditing, they devised a new set of filters and applied
them. They then re-did all audits.
### Overall notes from the audit
The decision was to **include languages that looked noisy, but omit any language
that was clearly majority noise, or only had 20 or fewer docs.** This is a low
bar -- twenty documents can be very little indeed, and some of the corpora released are quite noisy, but all of them should have at least the potential to
be used in some useful way. The motivation for not releasing nonsense or tiny
datasets is to not give a false sense of how multilingual this dataset actually
is ("Representation washing"), as recommended by **Quality at a Glance**.
A few overarching points:
* Many low-resource languages only had Bible text, or in some cases jw.org
data. These are marked in the rows below. Generally `ok bible` means that
100% of the audited sentences were Biblical, whereas if `bible` is simply
mentioned in the note, it was not the only source of data.
* Indian languages in the Latin script had a high concentration of
pornographic content.
### Renames and Merges as a result of the Audit
In several cases, it was clear from the audit that the corpora were not in the
languages that the LangID model claimed they were. This led to the following
renames:
* `dty` renamed to `zxx-xx-dtynoise`, aka a "language" of noise. This is mainly
mis-rendered PDFs and may have some practical applications for decoding such
text.
* `fan` renamed to `bum`
* `ss-SZ` renamed to `ss` -- this was just a result of inconsistent data
labels.
* `cjk` merged into the `gil` dataset
* `bjj` merged into the `awa` dataset
## Canaries
Canaries are provided in a separate `canaries` folder. Canaries are organized into three directories: `monolingual` hosts canaries designed for the MADLAD-400 monolingual data, `multiway` for the multiway data, and `generic` for the generic canaries generated only from the model's vocabulary.
* Monolingual: Canaries here are organized by the language the canary was generated from. This corresponds exactly to the `translate_copy` setting in the paper, where the source and target language match.
* Multiway: Canaries here are organized in one of two fashions. `to_XX` indicates canaries organized by the target language (and where the source language could be any language). `XX-XX` indicates the canaries (interleaved_both and interleaved_mislabeled_both) designed for a specific pair of languages.
Within each subdirectory above, canaries are split into separate files named by the canary type. There is always only a single file for each canary type. The `generic` folder contains within it the four canary types.
Canaries can be mixed in with normal training data and then analyzed post hoc after training.
## References
Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified
text-to-text transformer." J. Mach. Learn. Res. 21.140 (2020): 1-67.
## Contact
Please reach out to {snehakudugunta, icaswell}꩜google.com. For questions about the canaries, reach out to [email protected]
## License
This data is released with the `CC-BY-4.0` license.
## Detailed notes from the audit
Here are the notes on all languages, along with the number of documents
found, and the final decision made with respect to including the language in
this dataset.
| Lang. | note | N | decision |
| --------------- | ------------------------ | ---------- | --------------- |
| en | ok | 1838712272 | keep |
| ru | ok | 402458746 | keep |
| es | good | 250906994 | keep |
| de | ok | 225111495 | keep |
| fr | ok | 218863911 | keep |
| it | ok | 126406256 | keep |
| pt | ok | 124207090 | keep |
| pl | ok | 90908786 | keep |
| nl | ok | 86594116 | keep |
| tr | ok | 56417359 | keep |
| vi | ok | 54988654 | keep |
| cs | ok | 38254671 | keep |
| id | ok | 37979244 | keep |
| ro | ok | 35397563 | keep |
| sv | ok. Also the last | 35153050 | keep |
: : language (suz) is "ok : : :
: : bible" : : :
| hu | ok | 29677075 | keep |
| uk | ok | 24968305 | keep |
| fa | idk ask a farsi speaker; | 23138888 | keep |
: : ALI\: OK : : :
| ja | ok a little en mixed in | 21818123 | keep |
| el | ok | 20932239 | keep |
| fi | ok | 20433664 | keep |
| da | ok | 17865888 | keep |
| th | ok | 17439979 | keep |
| no | ok | 14864710 | keep |
| bg | ok | 12755329 | keep |
| ko | ok | 12653878 | keep |
| ar | good | 12411641 | keep |
| sk | ok | 11857945 | keep |
| ca | ok | 9477390 | keep |
| lt | ok | 8748025 | keep |
| iw | ok | 7194574 | keep |
| sl | ok | 6310419 | keep |
| et | ok | 5542933 | keep |
| lv | ok | 5007982 | keep |
| hi | ok some porn | 4512205 | keep |
| sq | good | 3622957 | keep |
| az | good | 3256331 | keep |
| hr | ok | 2841400 | keep |
| ta | ok | 2594191 | keep |
| ms | ok | 2337672 | keep |
| ml | ok | 2072605 | keep |
| sr | ok | 2010607 | keep |
| kk | ok | 1810963 | keep |
| te | ok a lot of weirdly low | 1682441 | keep |
: : quality looking content : : :
: : like commerce : : :
| mr | ok fix virama | 1673848 | keep |
| is | ok | 1560913 | keep |
| bs | good | 1362582 | keep |
| mk | ok | 1358293 | keep |
| gl | ok | 1253170 | keep |
| eu | ok | 1155671 | keep |
| bn | ok | 1138848 | keep |
| be | ok | 1092785 | keep |
| ka | ok | 936497 | keep |
| fil | ok more bible than | 901507 | keep |
: : expected for such a : : :
: : major language : : :
| mn | ok mongolian cyrillic | 879878 | keep |
| af | good | 868671 | keep |
| uz | ok some cyrillic noise | 669909 | keep |
| gu | ok | 659727 | keep |
| kn | ok | 657846 | keep |
| kaa | ok cyrillic | 586361 | keep |
| sw | ok | 537847 | keep |
| ur | ok | 467236 | keep |
| ne | ok | 453349 | keep |
| cy | ok; was terrible before | 430719 | keep |
: : filtering short docs : : :
| hy | ok | 397523 | keep |
| ky | ok | 367577 | keep |
| si | good | 349220 | keep |
| tt | good plus some | 346927 | keep |
: : nonunicode misrendered : : :
: : PDF : : :
| tg | good | 328194 | keep |
| la | ok some broken chars | 319178 | keep |
| so | good | 293218 | keep |
| ga | ok some en noise | 285999 | keep |
| km | ok | 285740 | keep |
| mt | ok | 265388 | keep |
| eo | ok; likely a lot of MT | 259971 | keep |
| ps | ok | 252888 | keep |
| rw | ok | 226466 | keep |
| ku | ok | 218850 | keep |
| lo | ok many entities in | 215982 | keep |
: : latin script : : :
| fy | ok plausible but i bet | 210025 | keep |
: : there is a lot of nl in : : :
: : there : : :
| ha | ok | 173485 | keep |
| my | filter noise and en fix | 172401 | keep |
: : virama : : :
| dv | good | 167179 | keep |
| pa | ok | 150588 | keep |
| ckb | ok | 148870 | keep |
| lb | ok | 145988 | keep |
| mg | ok some bible jw | 115387 | keep |
| ht | ok | 110443 | keep |
| ug | ok | 106549 | keep |
| am | good | 106301 | keep |
| or | ok | 100530 | keep |
| fo | good | 97754 | keep |
| gd | ok | 94275 | keep |
| ba | ok | 90318 | keep |
| tk | ok; a few weird docs | 82495 | keep |
| mi | ok | 79509 | keep |
| hmn | ok | 75213 | keep |
| grc | ok some bible | 70730 | keep |
| jv | ok | 69473 | keep |
| ceb | ok | 66164 | keep |
| sd | good | 65858 | keep |
| yi | ok | 64949 | keep |
| kaa-Latn | ok urls are .ru or .kz | 61169 | keep |
| sn | ok | 60196 | keep |
| co | ok; i suspect lots of | 55387 | keep |
: : MT : : :
| su | good | 54968 | keep |
| pap | ok | 54498 | keep |
| ig | ok | 54410 | keep |
| zu | good | 53809 | keep |
| xh | ok | 53672 | keep |
| sm | ok | 52614 | keep |
| ny | ok | 52244 | keep |
| yo | ok | 52067 | keep |
| cv | good | 47318 | keep |
| el-Latn | good; a lot of old | 46428 | keep |
: : content! : : :
| kl | ok | 46027 | keep |
| haw | ok scam tv products | 45670 | keep |
| gsw | wtf is happening here; | 42712 | keep |
: : keep with disclaimer; : : :
: : STILL BOILERPLATE : : :
| tet | good ; actually a lot of | 40367 | keep |
: : fun data! : : :
| st | ok | 40360 | keep |
| lus | ok | 36437 | keep |
| oc | ok | 36379 | keep |
| as | good | 33825 | keep |
| rm | ok | 33805 | keep |
| br | ok after shortfilter | 33219 | keep |
| sah | ok | 29169 | keep |
| hi-Latn | filter porn this is half | 26723 | keep |
: : porn : : :
| se | good | 23872 | keep |
| cnh | good, some local news! | 21556 | keep |
: : not sure if WL : : :
| om | ok | 18895 | keep |
| ce | ok | 14968 | keep |
| udm | ok | 13376 | keep |
| lg | ok lot of | 13030 | keep |
: : www.bukedde.co.ug in : : :
: : this : : :
| os | ok | 12623 | keep |
| nv | ok | 12578 | keep |
| kha | ok | 12070 | keep |
| ilo | ok some bible | 11754 | keep |
| ctd-Latn | ok; from some local | 11629 | keep |
: : news? : : :
| vec | very noisy has wiki from | 11108 | keep |
: : other langs and .it : : :
: : websites so not sure if : : :
: : vec : : :
| hil | ok some en boilerplate | 10564 | keep |
| tyv | ok fun stuff plus some | 9083 | keep |
: : russian noise i think : : :
| iba | ok jw data | 7638 | keep |
| ru-Latn | ok | 7523 | keep |
| kbd | ok many .ru | 7486 | keep |
| ti | ok; poor tigray | 7288 | keep |
| sa | ok | 7117 | keep |
| av | good | 6331 | keep |
| bo | needs some serious | 6226 | keep |
: : script filtering. but : : :
: : there is some ok data in : : :
: : there. : : :
| zza | good | 6019 | keep |
| ber-Latn | ok | 5612 | keep |
| otq | ok | 5554 | keep |
| te-Latn | great good text....but | 5305 | keep |
: : mostly pornographic : : :
| bua | ok | 5264 | keep |
| ts | good | 5198 | keep |
| cfm | ok mostly from | 4858 | keep |
: : chinland.co : : :
| tn | good | 4821 | keep |
| krc | ok | 4815 | keep |
| ak | good; much but not all | 4768 | keep |
: : bible : : :
| meo | ok mostly blogs | 4655 | keep |
| chm | ok; fyi watch out for | 4653 | keep |
: : yandex translationese : : :
| to | good ; news bible | 4612 | keep |
: : government : : :
| ee | good; mostly religious | 4536 | keep |
| nso | ok | 4422 | keep |
| ady | good | 4206 | keep |
| rom | bible | 4187 | keep |
| bho | mostly from anjoria.com. | 4121 | keep |
: : Looks like valid : : :
: : Bhojpuri. : : :
| ltg | ok mostly www.lakuga.lv | 4120 | keep |
| fj | ok | 3976 | keep |
| yua | ok | 3965 | keep |
| gn | ok some broken | 3858 | keep |
: : characters some bible : : :
| az-RU | good; a lot of JW | 3781 | keep |
| ln | ok bible jw | 3325 | keep |
| ada | good; bible; likely | 3095 | keep |
: : mixed with gaa : : :
| myv | maybe has .ru urls | 3095 | keep |
| bik | ok. keep in mind the bik | 3092 | keep |
: : vs bcl issue. : : :
| tlh | ok, but why tf are there | 3054 | keep |
: : websites in Klingon? all : : :
: : MT ? : : :
| kbp | not sure if right script | 3036 | keep |
: : wiki says latin : : :
| war | ok but v sus. Pls filter | 2928 | keep |
: : out wikipedia : : :
| wa | ok lots of wiki stuff | 2772 | keep |
| bew | mostly blogs. idk if | 2677 | keep |
: : standard Indonesian or : : :
: : not : : :
| rcf | ok | 2630 | keep |
| ta-Latn | good text .... but | 2580 | keep |
: : pornographic : : :
| kac | ok | 2567 | keep |
| iu | filter script some is en | 2537 | keep |
: : rest is iu script : : :
| ay | good; mix of bible and | 2505 | keep |
: : other news sources : : :
| kum | ok | 2495 | keep |
| qu | ok | 2449 | keep |
| bgp | almost all ur-Latn. | 2427 | keep |
: : consider removing or : : :
: : renaming : : :
| hif | ok some en noise and | 2358 | keep |
: : religious : : :
| kw | ok short boilerplate | 2324 | keep |
: : bible wiki; ok some porn : : :
| nan-Latn-TW | ok | 2285 | keep |
| srn | ok bible + jw | 2281 | keep |
| tly-IR | deeply sus | 2239 | keep |
| sg | ok jw | 2106 | keep |
| gom | ok | 2102 | keep |
| ml-Latn | ok some short docs | 2071 | keep |
| kj | ok | 2062 | keep |
| ksd | ok bible | 2000 | keep |
| dz | ok; hidden parallel | 1899 | keep |
: : text; maybe actually bo; : : :
: : mainly buddhist : : :
| kv | ok a lil boilerplate | 1878 | keep |
: : vibes : : :
| msi | ok | 1870 | keep |
| ve | ok mostly bible jw | 1866 | keep |
| zap | ok JW. | 1803 | keep |
| zxx-xx-dtynoise | BEAUTIFUL NOISE rename | 1765 | keep |
: : but keep as beautiful : : :
: : example. (was called : : :
: : "dty") : : :
| meu | ok bible | 1728 | keep |
| iso | ok jw | 1721 | keep |
| ium | filter out zh | 1721 | keep |
| nhe | ok | 1714 | keep |
| tyz | ok bible but again i | 1707 | keep |
: : think some mixed : : :
: : dialects : : :
| hui | ok some bible | 1680 | keep |
| new | ok | 1634 | keep |
| mdf | ok some short docs | 1609 | keep |
| pag | bible | 1588 | keep |
| gv | filter short repetitive | 1586 | keep |
: : sentences; still same : : :
: : but keep : : :
| gag | has 1-2 cyrillic | 1572 | keep |
: : examples with small amts : : :
: : of arabic script noise : : :
| ngu | ok | 1534 | keep |
| quc | bible | 1526 | keep |
| mam | ok bible jw | 1513 | keep |
| min | ok mostly wiki and bible | 1474 | keep |
| ho | ok | 1466 | keep |
| pon | bible | 1462 | keep |
| mrj | ok | 1447 | keep |
| lu | ok jw | 1444 | keep |
| gom-Latn | ok very noisy ; some ok | 1432 | keep |
: : stuff ; release with : : :
: : disclaimer : : :
| alt | ok | 1422 | keep |
| nzi | ok | 1371 | keep |
| tzo | ok bible + jw | 1357 | keep |
| bci | ok bible | 1329 | keep |
| dtp | ok; mostly from | 1309 | keep |
: : www.newsabahtimes.com.my : : :
| abt | fine; bible | 1305 | keep |
| bbc | ok | 1274 | keep |
| pck | ok | 1255 | keep |
| mai | ok mild amounts of en | 1240 | keep |
: : noise : : :
| mps | ok bible | 1239 | keep |
| emp | ok bible | 1238 | keep |
| mgh | ok bible jw | 1222 | keep |
| tab | idk plausibly ok | 1202 | keep |
| crh | ok | 1184 | keep |
| tbz | good mostly bible but | 1126 | keep |
: : not all : : :
| ss | good mix of data ; | 1089 | keep |
: : renamed from "ss-SZ" : : :
| chk | ok bible | 1082 | keep |
| bru | ok; bible | 1072 | keep |
| nnb | ok | 1071 | keep |
| fon | ok mostly jw but not all | 1065 | keep |
| ppk | bible | 1063 | keep |
| tiv | ok jw | 1063 | keep |
| btx | ok probably | 1009 | keep |
| bg-Latn | ok | 991 | keep |
| mbt | ok bible | 969 | keep |
| ace | good; bible | 966 | keep |
| tvl | ok jw | 933 | keep |
| dov | ok bible + jw | 923 | keep |
| ach | good; bible | 915 | keep |
| xal | ok has .ru sites though | 913 | keep |
| cuk | ok bible | 899 | keep |
| kos | ok lds bible | 881 | keep |
| crs | ok | 873 | keep |
| wo | ok; mostly bible. | 871 | keep |
| bts | ok; mostly bible | 869 | keep |
| ubu | ok bible | 846 | keep |
| gym | ok bible | 820 | keep |
| ibb | ok bible and repeated @ | 818 | keep |
| ape | good; bible | 814 | keep |
| stq | ok i think ? | 809 | keep |
| ang | much noise but some good | 803 | keep |
: : Old English in there! : : :
| enq | ok bible | 793 | keep |
| tsg | much noise but some good | 789 | keep |
: : data too! : : :
| shn | mostly English | 788 | keep |
: : boilerplate. filter by : : :
: : latin text before : : :
: : releasing : : :
| kri | ok boilerplate noise | 786 | keep |
: : bible jw : : :
| kek | ok jw bible | 782 | keep |
| rmc | ok | 738 | keep |
| acf | good; bible | 730 | keep |
| syr | good; practitioners | 716 | keep |
: : should keep dialect in : : :
: : mind. : : :
| qub | bible | 705 | keep |
| bm | good | 702 | keep |
| tzh | ok jw | 702 | keep |
| jiv | ok bible | 696 | keep |
| kn-Latn | filter en noise of | 688 | keep |
: : karnataka govt websites : : :
| kjh | ok .ru domain | 672 | keep |
| yap | ok | 638 | keep |
| ban | ok bible | 637 | keep |
| tuc | ok bible | 635 | keep |
| tcy | good; mostly wikipedia; | 632 | keep |
: : likely some konkani : : :
: : mixed in : : :
| cab | ok jw | 629 | keep |
| cak | ok bible | 617 | keep |
| din | ok after SD filter | 611 | keep |
| arn | good; bible | 593 | keep |
| lrc | ok | 587 | keep |
| gil | empty; but merged in | 586 | keep |
: : data in "cjk" : : :
| cjk | this is all in gil | 586 | keep |
: : (Kiribati). merged into : : :
: : "gil" : : :
| rwo | bible | 572 | keep |
| hus | ok bible | 569 | keep |
| bum | ok bible; but wrong | 559 | keep |
: : language. Data is in : : :
: : Bulu, not Fang : : :
| mak | ok bible | 555 | keep |
| frp | fair amount from | 550 | keep |
: : wikipedia. : : :
| seh | ok jw | 545 | keep |
| twu | ok bible, but also i | 539 | keep |
: : think it's lots of mixed : : :
: : similar dialects : : :
| kmb | ok bible jw | 538 | keep |
| ksw | ok bible | 536 | keep |
| sja | ok bible | 527 | keep |
| amu | good; bible; crazy | 511 | keep |
: : diacritics : : :
| mad | remove mostly short text | 509 | keep |
| quh | bible | 501 | keep |
| dyu | ok bible | 483 | keep |
| toj | ok jw | 452 | keep |
| ch | ok; not sure about WL | 449 | keep |
| sus | hella sus jk ok bible | 437 | keep |
| nog | ok | 419 | keep |
| jam | ok bible | 416 | keep |
| gui | ok bible | 409 | keep |
| nia | ok | 408 | keep |
| mas | ok some amount of bible | 405 | keep |
| bzj | ok bible | 404 | keep |
| mkn | ok bible | 402 | keep |
| lhu | ok bible | 377 | keep |
| ctu | ok bible | 366 | keep |
| kg | ok bible jw | 365 | keep |
| inb | ok bible | 343 | keep |
| guh | ok bible | 331 | keep |
| rn | bible | 323 | keep |
| bus | ok; bible; about 50bzc | 322 | keep |
| mfe | ok mostly bible maybe | 320 | keep |
: : some french creole short : : :
: : doc noise : : :
| sda | ok bible | 317 | keep |
| bi | good! fun! | 311 | keep |
| cr-Latn | noise and lorem ipsum. | 303 | keep |
: : But some ok Cree text. : : :
| gor | ok bible | 303 | keep |
| jac | ok bible | 303 | keep |
| chr | ok bible | 301 | keep |
| mh | ok jw lds | 296 | keep |
| mni | ok | 290 | keep |
| wal | ok bible + jw | 286 | keep |
| teo | ok bible | 274 | keep |
| gub | ok bible | 271 | keep |
| qvi | bible | 266 | keep |
| tdx | ok jw | 262 | keep |
| rki | ok | 251 | keep |
| djk | ok; bible+jw | 246 | keep |
| nr | ok | 246 | keep |
| zne | ok jw | 239 | keep |
| izz | ok bible | 237 | keep |
| noa | ok | 234 | keep |
| bqc | ok; bible | 228 | keep |
| srm | ok; bible + jw | 227 | keep |
| niq | ok | 226 | keep |
| bas | ok; has some fun blog | 216 | keep |
: : stuff! : : :
| dwr | ok; bible; mixed script | 215 | keep |
| guc | ok bible | 214 | keep |
| jvn | ok bible | 213 | keep |
| hvn | ok religioous text | 200 | keep |
| sxn | ok bible ; also wild | 197 | keep |
: : diacritics : : :
| koi | ok | 196 | keep |
| alz | good; bible | 195 | keep |
| nyu | ok | 195 | keep |
| bn-Latn | ok | 191 | keep |
| suz | ok bible | 186 | keep |
| pau | ok | 185 | keep |
| nij | ok | 183 | keep |
| sat-Latn | good! all from local news | 183 | keep |
: : sources : : :
| gu-Latn | filter short en | 179 | keep |
: : boilerplate and : : :
: : repetitive sentences : : :
| msm | ok bible | 177 | keep |
| maz | ok bible jw | 170 | keep |
| qxr | bible | 153 | keep |
| shp | ok bible | 150 | keep |
| hne | ok | 146 | keep |
| ktu | ok bible jw | 144 | keep |
| laj | ok bible | 144 | keep |
| pis | bible | 139 | keep |
| mag | ok fix virama issue | 138 | keep |
| gbm | ok | 137 | keep |
| tzj | ok bible | 136 | keep |
| oj | ok | 135 | keep |
| ndc-ZW | ok | 132 | keep |
| tks | ok bible but again i | 127 | keep |
: : think some mixed : : :
: : dialects : : :
| gvl | filter short boilerplate | 126 | keep |
: : mostly bible : : :
| knj | ok bible | 126 | keep |
| awa | all bible in awadhi | 126 | keep |
: : (awa). Renamed from bjj : : :
| spp | ok bible | 123 | keep |
| mqy | bible remove short docs | 119 | keep |
| tca | ok bible + jw | 117 | keep |
| cce | ok jw | 116 | keep |
| skr | ok; some pnb mixed in | 107 | keep |
| kmz-Latn | ok some ar script noise | 106 | keep |
| dje | ok; mostly but not all | 100 | keep |
: : bible : : :
| gof | ok some bible | 97 | keep |
| agr | good; bible | 93 | keep |
| qvz | bible | 88 | keep |
| adh | good; bible | 87 | keep |
| quf | bible | 86 | keep |
| kjg | ok bible | 84 | keep |
| tsc | ok | 82 | keep |
| ber | ok great! | 79 | keep |
| ify | ok bible | 79 | keep |
| cbk | ok bible | 78 | keep |
| quy | bible | 78 | keep |
| ahk | good; bible; crazy | 77 | keep |
: : diacritics : : :
| cac | ok bible | 77 | keep |
| akb | good; bible | 71 | keep |
| nut | ok | 67 | keep |
| ffm | ok bible; mixed fulfulde | 65 | keep |
: : dialects; consider : : :
: : merging with ff : : :
| taj | ok bible | 65 | keep |
| ms-Arab | ok mostly utusanmelayu | 63 | keep |
: : website : : :
| brx | quite good! | 62 | keep |
| ann | good; all from wikimedia | 56 | keep |
: : incubator : : :
| qup | bible | 53 | keep |
| ms-Arab-BN | ok not sure if same as | 46 | keep |
: : ms-Arab : : :
| miq | ok | 45 | keep |
| msb | ok bible | 41 | keep |
| bim | good; bible | 40 | keep |
| raj | ok | 40 | keep |
| kwi | ok bible | 37 | keep |
| tll | ok jw | 37 | keep |
| trp | good ; lots of random | 36 | keep |
: : stuff : : :
| smt | ok bible but lots of | 34 | keep |
: : different bibles! : : :
| mrw | ok | 29 | keep |
| dln | ok bible | 28 | keep |
| qvc | bible | 27 | keep |
| doi | ok actually nice! | 26 | keep |
| ff | ok after shortfilter | 26 | keep |
| zh | very noisy | 19850947 | keep (filtered) |
| zh-Latn | poor quality | 602 | remove |
| rhg-Latn | remove | 10302 | remove |
| ja-Latn | remove maybe low quality | 7516 | remove |
: : short and repeated : : :
| pam | remove | 2773 | remove |
| za | revisit after | 1700 | remove |
: : shortfilter : : :
| ar-Latn | terrible, 0% correct, | 1520 | remove |
: : remove : : :
| mnw | remove en noise and | 1100 | remove |
: : boilerplate : : :
| fip | ok jw ; but wrong | 729 | remove |
: : language. mostly : : :
: : Mambwe-Lungu and Bemba, : : :
: : as well as Fipu (mgr+bem : : :
: : vs. fip) : : :
| el-CY | bad; not Cypriote | 537 | remove |
| luz | terrible; remove | 354 | remove |
| cni | ok; bible; lots of mixed | 261 | remove |
: : in content in : : :
: : not,cob,cpc,arl : : :
| apd-SD | terribly questionable; | 227 | remove |
: : probably remove : : :
| mey | mostly short and noisy | 127 | remove |
: : borderline : : :
| awa | OK; should be used with | 126 | remove |
: : caution and suspicion : : :
| mtq | remove short doc | 111 | remove |
: : repetitive : : :
| mel | remove noisy en | 103 | remove |
| mr-Latn | remove mostly porn and | 91 | remove |
: : short docs : : :
| srr | remove ; english | 91 | remove |
: : boilerplate : : :
| en-Cyrl | ok ... some fr-Cyrl too | 90 | remove |
: : and maybe others : : :
| en-Arab | remove | 79 | remove |
| syl | idk maybe ok ? | 61 | remove |
| jax | filter mostly | 58 | remove |
: : text.medjugorje.ws : : :
: : boilerplate : : :
| xmm | very noisy lots of dj | 58 | remove |
: : tiktok and peppa pig : : :
: : repeated : : :
| shu | quite questionable. prob | 53 | remove |
: : remove : : :
| ks | ok shorter docs | 51 | remove |
| gyn | remove boilerplate and | 45 | remove |
: : porn : : :
| aa | some pretty bad data but | 32 | remove |
: : also some good data. : : :
: : filter on "Woo" (case : : :
: : sensitive) : : :
| sjp | terrible; probably | 31 | remove |
: : remove; check again : : :
: : after short filter : : :
| abs | all short nonsense | 24 | remove |
: : remove : : :
| mui | remove short docs | 23 | remove |
| mdh | filter porn short text | 22 | remove |
: : and repetitive : : :
: : boilerplate : : :
| noe | ok | 22 | remove |
| sxu | revisit after shortfilter | 22 | remove |
| bhb-Gujr | bad. remove. all junk | 20 | remove |
: : gu. : : :
| yaq | remove | 20 | remove |
| prk | ok | 18 | remove |
| cgg | rather noisy but | 17 | remove |
: : potentially ok. not sure : : :
: : if WL or not : : :
| bto | bad; remove unless short | 16 | remove |
: : filter keeps enough : : :
| ayl | terrible | 13 | remove |
| pa-Arab | ok | 13 | remove |
| bmm | terrible. filter on | 11 | remove |
: : short and reevaluate : : :
| mfb | remove short boilerplate | 11 | remove |
| mtr | ok fix virama remove en | 11 | remove |
: : noise : : :
| pmy | remove | 11 | remove |
| skg | terrible; remove | 11 | remove |
| ymm | remove | 11 | remove |
| xnr | ok maybe fix virama | 9 | remove |
: : though it seems fine : : :
| kjb | ok bible | 8 | remove |
| azg | short noise; bible | 7 | remove |
| bgz | idk maybe ok but | 7 | remove |
: : probably bad : : :
| ctg | probably terrible | 7 | remove |
: : probably remove : : :
| nyo | ok | 7 | remove |
| mdy | ok bible | 6 | remove |
| syl-Latn | revisit or remove after | 6 | remove |
: : shortfilter : : :
| xog | ok bible and stories | 6 | remove |
| cyo | terrifying noise; remove | 4 | remove |
| kfy | filter virama issue | 4 | remove |
| nd | ok | 4 | remove |
| rwr | remove | 4 | remove |
| tuf | ok bible | 4 | remove |
| clu | ok bible | 3 | remove |
| ng | ok | 3 | remove |
| zyj | deeply bad data .. | 3 | remove |
: : revisit after : : :
: : shortfilter : : :
| rkt | ok | 2 | remove |
| bgc | super sketch. Remove | 1 | remove |
: : unless short doc filter : : :
: : leaves some. remove : : :
| dcc | remove | 1 | remove |
| ff-Adlm | good | 1 | remove |
| gju | remove short boilerplate | 1 | remove |
| max | remove short some ru | 1 | remove |
| mwr | filter short docs fix | 1 | remove |
: : virama : : :
| trw | sus; remove | 1 | remove |
| vkt | 1 doc remove | 1 | remove |
| gjk | empty remove | 0 | remove |
| bfy | very bad. remove unless | 0 | remove |
: : it looks better after : : :
: : filtering short docs; : : :
: : remove : : :
| nyn | ok | 0 | remove |
| sgj | remove | 0 | remove |
A few comments too long to fit in the table above:
* `alt`: WAIT THIS IS AMAZING IT IS ACTUALLY ALTAI! e.g. from urls like
https://altaicholmon.ru/2020/02/28/jarashty-la-jajaltany-jarkyndu-lekeri/
* `tly-IR`: They all look like boilerplate content, e.g., list of
keywords/search queries used to bump page ranking in search results. Not any
useful material for translation. Remove.
* `zap`: pls note that at least some Zapotec speakers tend to view it as one
language, not as a million dialects like ISO does. However, some are
certainly mutually unintelligible, complicating the matter.
* `zh-Latn`: The biggest problem is that several examples are not in Latin
Chinese (i.e., romanization in my understanding) but in English or mixed
English and Chinese. For those data in Latin Chinese, their quality seems to
be good.
* `zh`: Many examples are porn-related, particularly those very long
documents. Also, there are some examples of traditional Chinese.
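As noted in the special-filters discussion earlier in this card, the Chinese pornography issue was addressed with a manually curated substring blocklist applied at the page level. In spirit, that filter amounts to the following sketch (the terms here are placeholders; the real list is not published):

```python
# Placeholder terms only -- the actual blocklist was curated by manual
# inspection and is not reproduced in this card.
BLOCKED_SUBSTRINGS = ("placeholder_term_1", "placeholder_term_2")

def keep_page(text: str) -> bool:
    # Drop any page that contains at least one blocked substring.
    return not any(term in text for term in BLOCKED_SUBSTRINGS)
```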
## Final Dataset information
The number of documents, sentences, tokens, characters, and bytes for the noisy
and clean splits of the data. Note that the "toks" field below uses whitespace
for tokenization, so it is not appropriate for non-whitespace-separating languages
like Chinese (see section above). Note that the English subset in this version
is missing 18% of the documents that were included in the published analysis of the dataset.
These documents will be incorporated in an update coming soon.
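To make the tokenization caveat concrete: the counts below come from splitting on whitespace, which collapses unsegmented scripts into single "tokens". A minimal illustration (the `whitespace_toks` helper is ours, purely for demonstration):

```python
def whitespace_toks(text: str) -> int:
    # Same naive scheme as the "toks" columns: split on runs of whitespace.
    return len(text.split())

print(whitespace_toks("The quick brown fox jumps over the lazy dog."))  # 9
print(whitespace_toks("敏捷的棕色狐狸跳过了懒狗。"))  # 1: the whole sentence is one "token"
```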
BCP-47 | docs (noisy) | docs (clean) | sents (noisy) | sents (clean) | toks (noisy) | toks (clean) | chars (noisy) | chars (clean) | bytes (clean) | bytes (noisy) |
----------------|:---------------|:---------------|:----------------|:----------------|:---------------|:---------------|:----------------|:----------------|:---------|:---------|
total* | 7.2B | 3.7B | 133.1B | 97.5B | 4.6T | 2.6T | 30.6T | 16.0T | 11.4 T | 6.3 T |
en* | 3.0B | 1.5B | 71.1B | 45.4B | 2.0T | 1.3T | 12.3T | 7.6T | 2.6 T | 4.3 T |
ru | 823M | 402.5M | 823M | 12.4B | 416.5B | 240.9B | 3.1T | 1.8T | 832.9 G | 1.4 T |
es | 476.4M | 250.9M | 8.3B | 4.5B | 325.7B | 170.4B | 2.1T | 1.1T | 380.9 G | 747.5 G |
de | 478.6M | 225.1M | 11.5B | 6B | 299.5B | 139.6B | 2.2T | 1T | 370.6 G | 815.5 G |
fr | 384.2M | 218.9M | 7.9B | 5B | 307.1B | 165.2B | 2T | 1T | 370.4 G | 699.1 G |
it | 238.9M | 126.4M | 4.5B | 2.5B | 180.1B | 83.6B | 1.2T | 553.1B | 198.4 G | 429.6 G |
pt | 209.2M | 124.2M | 4B | 2.4B | 123.2B | 79.2B | 791.5B | 499.8B | 183.1 G | 289.6 G |
pl | 145.1M | 90.9M | 3.3B | 2.4B | 68.9B | 49.2B | 505B | 356.4B | 140.7 G | 202.5 G |
nl | 134.5M | 86.6M | 134.5M | 2.3B | 104.4B | 51.6B | 698.5B | 334.5B | 118.2 G | 247.5 G |
tr | 107M | 56.4M | 107M | 1.2B | 41.9B | 25B | 328.8B | 198.9B | 73.7 G | 123.9 G |
vi | 92.8M | 55M | 1.6B | 1B | 71.5B | 48.7B | 342B | 228.8B | 88.8 G | 133.9 G |
cs | 72.1M | 38.3M | 1.7B | 1B | 40.8B | 22.1B | 272.2B | 147.9B | 62.1 G | 112.7 G |
id | 120.9M | 38M | 2.2B | 747.5M | 60.4B | 20.2B | 443B | 148.3B | 48.5 G | 148.7 G |
ro | 60.8M | 35.4M | 60.8M | 746.4M | 37.1B | 22.9B | 244.1B | 148.2B | 55.5 G | 90.3 G |
sv | 65.2M | 35.2M | 65.2M | 1B | 62.1B | 23.9B | 422.6B | 153.7B | 57.0 G | 149.9 G |
hu | 47.6M | 29.7M | 1.3B | 806.3M | 29.8B | 17.8B | 223.6B | 134.9B | 53.5 G | 86.8 G |
uk | 46.6M | 25M | 1B | 599.9M | 21.6B | 12.8B | 164.2B | 95.2B | 45.1 G | 75.8 G |
fa | 58.1M | 23.1M | 920.6M | 493.5M | 40.6B | 18.4B | 220.4B | 96.7B | 43.4 G | 97.4 G |
ja | 23.3M | 21.8M | 326M | 321.6M | 10.9B | 10.9B | 133.3B | 132.2B | 98.7 G | 99.7 G |
el | 52.4M | 20.9M | 808M | 445.4M | 25B | 12B | 173.2B | 80.9B | 37.9 G | 80.8 G |
fi | 35.8M | 20.4M | 1B | 650.3M | 23.8B | 11.5B | 202.2B | 101.1B | 37.6 G | 74.1 G |
zh | 29.3M | 19.9M | 492.3M | 298.8M | 19.2B | 10B | 333B | 142.3B | 109.9 G | 191.8 G |
da | 38.5M | 17.9M | 1.1B | 508M | 37.7B | 13B | 252B | 83.1B | 29.4 G | 89.5 G |
th | 19M | 17.4M | 19M | 385.8M | 8.9B | 8.9B | 118.6B | 117.6B | 57.6 G | 58.2 G |
no | 34.7M | 14.9M | 34.7M | 498.7M | 46.6B | 11.8B | 305.6B | 74.8B | 27.3 G | 109.8 G |
bg | 27.2M | 12.8M | 599.4M | 360.3M | 14.4B | 8.8B | 95.6B | 57.8B | 26.0 G | 42.8 G |
ko | 19.7M | 12.7M | 628.6M | 471.8M | 13.3B | 9.3B | 65.9B | 43.8B | 34.2 G | 49.1 G |
ar | 67.6M | 12.4M | 876.6M | 182.6M | 39B | 7.1B | 243B | 43.2B | 20.9 G | 115.9 G |
sk | 23.2M | 11.9M | 487.9M | 300.6M | 11.3B | 6.7B | 77.8B | 45.7B | 18.8 G | 31.9 G |
ca | 17.9M | 9.5M | 258.6M | 153M | 8.9B | 5.6B | 56.5B | 34.6B | 12.6 G | 20.8 G |
lt | 15.3M | 8.7M | 374M | 256.9M | 7.5B | 5.3B | 58.6B | 41.3B | 15.7 G | 22.3 G |
he | 14.1M | 7.2M | 302.2M | 196.8M | 9.2B | 5.2B | 54.9B | 30.5B | 14.8 G | 26.3 G |
sl | 12M | 6.3M | 316M | 180M | 6.9B | 4.5B | 47.8B | 30.5B | 11.5 G | 18.0 G |
et | 8.8M | 5.5M | 223.8M | 176.3M | 5B | 3.6B | 40.1B | 28.7B | 10.7 G | 15.0 G |
lv | 8.4M | 5M | 186.1M | 138.5M | 4.8B | 3.2B | 36.7B | 23.9B | 9.1 G | 13.8 G |
hi | 9.9M | 4.5M | 254.4M | 152M | 7.4B | 3.8B | 39.9B | 20.1B | 9.9 G | 19.7 G |
sq | 5.5M | 3.6M | 5.5M | 56.1M | 2.7B | 2.1B | 17B | 12.7B | 4.8 G | 6.6 G |
az | 5.2M | 3.3M | 90.3M | 70.9M | 2.1B | 1.5B | 16.3B | 11.9B | 4.5 G | 6.3 G |
hr | 23M | 2.8M | 476.6M | 53M | 12.6B | 1.4B | 85.1B | 9.6B | 3.7 G | 33.5 G |
ta | 5.6M | 2.6M | 122.5M | 81.9M | 2.1B | 1.1B | 19.2B | 10.6B | 4.9 G | 8.8 G |
ms | 14.1M | 2.3M | 14.1M | 55.2M | 8B | 1.7B | 58.8B | 12.5B | 4.0 G | 20.4 G |
ml | 3.7M | 2.1M | 75M | 52M | 1B | 603.3M | 10.5B | 6.3B | 3.0 G | 5.1 G |
sr | 4.7M | 2M | 4.7M | 64M | 2.7B | 1.6B | 18.6B | 11B | 5.1 G | 8.7 G |
kk | 3.1M | 1.8M | 87.4M | 59.1M | 1.6B | 1B | 13.4B | 8.6B | 3.8 G | 5.8 G |
te | 2.5M | 1.7M | 59M | 46.4M | 900.2M | 618.5M | 7.4B | 5.1B | 2.6 G | 3.8 G |
mr | 2.9M | 1.7M | 2.9M | 50M | 1.2B | 776.9M | 8.7B | 5.5B | 2.8 G | 4.4 G |
is | 2.9M | 1.6M | 73.7M | 39.3M | 2.1B | 979.2M | 14.9B | 6.4B | 2.5 G | 5.9 G |
bs | 12.9M | 1.4M | 163.6M | 9M | 5.9B | 490.9M | 39.5B | 3.3B | 1.3 G | 15.6 G |
mk | 2.9M | 1.4M | 41.3M | 22.6M | 1.3B | 685.9M | 9.1B | 4.5B | 2.0 G | 4.0 G |
gl | 4.2M | 1.3M | 45.3M | 18.8M | 2.3B | 748.4M | 15.6B | 4.8B | 1.7 G | 5.5 G |
eu | 2.1M | 1.2M | 41.7M | 24.8M | 827.5M | 525.3M | 6.9B | 4.3B | 1.5 G | 2.4 G |
bn | 4.3M | 1.1M | 151.2M | 38.6M | 2.5B | 645.7M | 16.8B | 4.3B | 2.2 G | 8.7 G |
be | 2M | 1.1M | 48.8M | 31.3M | 981M | 632.9M | 7.2B | 4.6B | 2.2 G | 3.5 G |
ka | 3.1M | 936.5K | 53.7M | 26.6M | 1.2B | 460.8M | 10.3B | 3.8B | 1.9 G | 5.0 G |
fil | 4.2M | 901.5K | 67.4M | 19.2M | 2.2B | 741.7M | 14.6B | 4.7B | 1.5 G | 5.0 G |
mn | 2.2M | 879.9K | 43.3M | 24M | 1.1B | 487.5M | 7.9B | 3.5B | 1.6 G | 3.5 G |
af | 2.9M | 868.7K | 51.9M | 30M | 1.7B | 795M | 11.8B | 4.8B | 1.8 G | 4.2 G |
uz | 1.4M | 669.9K | 25.7M | 17.5M | 605.9M | 388.3M | 5.2B | 3.3B | 1.1 G | 1.9 G |
gu | 1.3M | 659.7K | 28.9M | 18.1M | 634.4M | 345.9M | 3.9B | 2.1B | 1.1 G | 2.0 G |
kn | 1.6M | 657.8K | 32.9M | 19.2M | 546.4M | 258.6M | 4.6B | 2.2B | 1.1 G | 2.3 G |
kaa | 1.1M | 586.4K | 19.8M | 13.3M | 455.9M | 269M | 3.8B | 2.2B | 990.2 M | 1.6 G |
sw | 1.3M | 537.8K | 1.3M | 9.5M | 660.7M | 345.8M | 4.6B | 2.4B | 826.1 M | 1.6 G |
ur | 967.2K | 467.2K | 29M | 18.4M | 1B | 562.5M | 5.2B | 2.7B | 1.2 G | 2.4 G |
ne | 876.4K | 453.3K | 876.4K | 20.4M | 585M | 345.3M | 3.9B | 2.2B | 1.1 G | 1.9 G |
cy | 4.9M | 430.7K | 68.3M | 7.4M | 3.6B | 275.6M | 26.4B | 1.7B | 609.5 M | 10.0 G |
hy | 2M | 397.5K | 31.1M | 9.9M | 1B | 190.9M | 8.1B | 1.5B | 678.9 M | 3.6 G |
ky | 751.1K | 367.6K | 14.3M | 9.6M | 303.4M | 181.6M | 2.5B | 1.4B | 665.1 M | 1.1 G |
si | 788K | 349.2K | 22.1M | 16M | 507.3M | 293.3M | 3.4B | 1.9B | 1023.6 M | 1.8 G |
tt | 2.1M | 346.9K | 60.2M | 8.6M | 1B | 135M | 12.1B | 1B | 494.1 M | 4.6 G |
tg | 789.2K | 328.2K | 789.2K | 7.4M | 363.8M | 208.8M | 2.6B | 1.4B | 635.7 M | 1.1 G |
la | 2.9M | 319.2K | 85.7M | 13.8M | 1.1B | 218.4M | 8.2B | 1.5B | 550.6 M | 2.9 G |
so | 729.2K | 293.2K | 729.2K | 3.1M | 294.8M | 146.3M | 2.1B | 992.4M | 350.8 M | 746.2 M |
ga | 5.3M | 286K | 31.7M | 6.9M | 4.2B | 229.3M | 30.6B | 1.4B | 500.7 M | 9.8 G |
km | 297.8K | 285.7K | 5M | 5M | 53M | 52.6M | 1.1B | 1.1B | 566.2 M | 570.0 M |
mt | 1.2M | 265.4K | 1.2M | 5.6M | 390.4M | 171.5M | 3.2B | 1.3B | 467.4 M | 1.1 G |
eo | 1.4M | 260K | 33.9M | 9.3M | 745.1M | 253.1M | 5.5B | 1.7B | 627.6 M | 1.9 G |
ps | 429.9K | 252.9K | 5.1M | 3.6M | 293.9M | 177.5M | 1.4B | 848.9M | 403.5 M | 682.9 M |
rw | 681.8K | 226.5K | 681.8K | 1.9M | 225M | 99.8M | 1.7B | 749.1M | 264.8 M | 702.4 M |
ku | 671.9K | 218.9K | 10.7M | 4.9M | 305.3M | 143.8M | 2.1B | 849.9M | 335.3 M | 791.9 M |
lo | 229.1K | 216K | 2.9M | 2.8M | 41.7M | 41.1M | 706.9M | 697.6M | 365.3 M | 370.8 M |
fy | 1.7M | 210K | 12.1M | 3.7M | 506.9M | 94M | 3.7B | 592.3M | 223.0 M | 1.2 G |
ha | 443.9K | 173.5K | 4.5M | 2.4M | 206.5M | 109.3M | 1.3B | 630.2M | 219.0 M | 478.1 M |
my | 176.5K | 172.4K | 176.5K | 10.1M | 96.6M | 96.3M | 1.3B | 1.3B | 648.8 M | 650.4 M |
dv | 264.4K | 167.2K | 4.3M | 3.5M | 92.8M | 64M | 877.3M | 603.1M | 238.3 M | 343.2 M |
pa | 368.2K | 150.6K | 368.2K | 6M | 306M | 152.8M | 1.6B | 797.1M | 414.1 M | 857.6 M |
ckb | 622.7K | 148.9K | 5.6M | 2.5M | 312.7M | 83.3M | 2.2B | 572.7M | 265.0 M | 1011.1 M |
lb | 7.6M | 146K | 47.1M | 3.4M | 7.5B | 85M | 58.4B | 575.5M | 218.4 M | 22.2 G |
mg | 295.2K | 115.4K | 4.5M | 2.6M | 189.4M | 75.5M | 1.3B | 548.5M | 179.0 M | 429.3 M |
ht | 425.6K | 110.4K | 6.7M | 2.6M | 163M | 84.3M | 994.5M | 461.5M | 168.2 M | 361.5 M |
ug | 227.1K | 106.5K | 4.5M | 3.1M | 122.9M | 62.7M | 998.5M | 504.6M | 233.1 M | 449.9 M |
am | 245.2K | 106.3K | 7.1M | 5.3M | 157M | 95.2M | 869.9M | 509M | 345.5 M | 539.4 M |
or | 139.6K | 100.5K | 139.6K | 3.1M | 66M | 47.3M | 437.2M | 309.5M | 160.3 M | 228.1 M |
fo | 382.9K | 97.8K | 3.9M | 1.8M | 136.5M | 48.9M | 923.3M | 314.9M | 122.0 M | 328.8 M |
gd | 206K | 94.3K | 3.7M | 2.4M | 127.6M | 84.5M | 812M | 526M | 173.4 M | 276.6 M |
ba | 372.4K | 90.3K | 9.3M | 2.6M | 101M | 42.1M | 766.5M | 320.7M | 154.8 M | 352.4 M |
tk | 180.2K | 82.5K | 180.2K | 1.8M | 65.4M | 43.3M | 575.2M | 369M | 131.3 M | 221.6 M |
mi | 711.9K | 79.5K | 5.9M | 1.9M | 262.5M | 73.5M | 1.6B | 371.9M | 120.2 M | 539.1 M |
hmn | 241.3K | 75.2K | 3.5M | 1.9M | 192.1M | 80.2M | 1.2B | 408.8M | 124.3 M | 366.0 M |
grc | 364.8K | 70.7K | 13.7M | 2.8M | 298.6M | 65.3M | 2B | 417.8M | 217.7 M | 1.0 G |
jv | 999.5K | 69.5K | 13M | 2M | 302.3M | 52.1M | 2.3B | 376.1M | 130.9 M | 797.8 M |
ceb | 617.5K | 66.2K | 6.7M | 1.6M | 225M | 58.2M | 1.5B | 357.7M | 116.2 M | 451.4 M |
sd | 115.6K | 65.9K | 115.6K | 2.4M | 112.6M | 77.8M | 561M | 380.4M | 182.3 M | 267.1 M |
yi | 160.6K | 64.9K | 3.3M | 1.9M | 129.1M | 53.9M | 838.4M | 352.6M | 146.0 M | 350.8 M |
kaa_Latn | 375.2K | 61.2K | 3.6M | 1.3M | 375.2K | 61.2K | 1.5M | 209.5K | 86.2 M | 264.6 M |
sn | 3.1M | 60.2K | 3.1M | 1.2M | 1.3B | 31.6M | 10.6B | 266M | 92.5 M | 3.2 G |
co | 546.7K | 55.4K | 6.1M | 1.3M | 172.6M | 43.6M | 1.1B | 265.5M | 98.8 M | 386.8 M |
su | 336.6K | 55K | 336.6K | 1.6M | 154M | 39.5M | 967.2M | 286.7M | 100.7 M | 308.5 M |
pap | 259.1K | 54.5K | 259.1K | 1.4M | 183.9M | 41.1M | 1.4B | 229.9M | 83.5 M | 451.4 M |
ig | 130.4K | 54.4K | 2.1M | 1.4M | 129.2M | 45.7M | 846.1M | 251.4M | 93.0 M | 178.9 M |
zu | 372.3K | 53.8K | 3.8M | 1.2M | 148.4M | 27.2M | 1.2B | 257.4M | 89.6 M | 374.7 M |
xh | 310.9K | 53.7K | 2.9M | 1.4M | 81.6M | 31.2M | 749.5M | 287.3M | 100.0 M | 319.1 M |
sm | 137.8K | 52.6K | 1.9M | 1.3M | 100.9M | 53.7M | 607.9M | 276.3M | 88.6 M | 184.5 M |
ny | 181.6K | 52.2K | 181.6K | 1.5M | 80.6M | 34.8M | 611.2M | 277.5M | 91.8 M | 209.8 M |
yo | 115K | 52.1K | 2M | 1.2M | 76.6M | 46.3M | 415.6M | 239M | 89.2 M | 157.8 M |
cv | 599.4K | 47.3K | 12M | 1.6M | 169.6M | 22.2M | 1B | 168.9M | 82.1 M | 413.6 M |
el_Latn | 497.3K | 46.4K | 11.3M | 1.7M | 497.3K | 46.4K | 2.3M | 162.8K | 196.8 M | 571.1 M |
kl | 85.9K | 46K | 2.1M | 1.5M | 32.3M | 22.3M | 403.9M | 279.1M | 84.2 M | 126.1 M |
haw | 310.4K | 45.7K | 7.1M | 1M | 141M | 43.3M | 892M | 214.2M | 69.9 M | 271.2 M |
gsw | 7.6M | 42.7K | 64.5M | 1M | 5B | 22.3M | 42.3B | 149.2M | 53.8 M | 13.5 G |
tet | 291K | 40.4K | 1.9M | 475.7K | 240.6M | 22.8M | 1.6B | 152.3M | 51.2 M | 455.4 M |
st | 96.8K | 40.4K | 96.8K | 1.1M | 65M | 39.8M | 381.5M | 226.9M | 74.0 M | 127.0 M |
lus | 91.5K | 36.4K | 1.4M | 863.5K | 53M | 31.3M | 298.3M | 167.3M | 60.1 M | 107.0 M |
oc | 2.4M | 36.4K | 2.4M | 1.6M | 887.6M | 26.7M | 6.7B | 177.6M | 58.7 M | 1.9 G |
as | 53.9K | 33.8K | 2.4M | 1.7M | 41.4M | 27.9M | 275.8M | 182.1M | 95.8 M | 146.1 M |
rm | 238.1K | 33.8K | 238.1K | 603.4K | 59.2M | 15.8M | 391M | 100.2M | 34.6 M | 133.1 M |
br | 705.4K | 33.2K | 7.8M | 731.7K | 646.8M | 21M | 3.7B | 125.4M | 46.2 M | 1.2 G |
sah | 1.3M | 29.2K | 1.3M | 1.2M | 283.7M | 17.6M | 2.2B | 148.2M | 68.3 M | 852.3 M |
hi_Latn | 1.2M | 26.7K | 22.6M | 1.2M | 1.2M | 26.7K | 5.3M | 98.9K | 53.5 M | 1.7 G |
se | 54.3K | 23.9K | 879.5K | 493.3K | 17.7M | 10M | 148.4M | 84.6M | 31.1 M | 56.6 M |
cnh | 44.4K | 21.6K | 688.6K | 406.9K | 21.6M | 12.5M | 110.8M | 63M | 22.1 M | 39.6 M |
om | 846.1K | 18.9K | 846.1K | 469.8K | 238M | 11.2M | 1.9B | 88.5M | 30.4 M | 881.5 M |
ce | 59.3K | 15K | 991.1K | 460.1K | 17.8M | 9.6M | 130.6M | 67.8M | 31.1 M | 60.2 M |
udm | 67.1K | 13.4K | 942.7K | 510.3K | 14M | 7.4M | 106M | 55.5M | 26.3 M | 49.2 M |
lg | 61.1K | 13K | 510.9K | 166.1K | 21.4M | 6.1M | 160.7M | 48M | 17.3 M | 56.7 M |
os | 172.1K | 12.6K | 172.1K | 359.3K | 27.1M | 6.9M | 233.5M | 50.1M | 23.1 M | 87.7 M |
nv | 17.1K | 12.6K | 17.1K | 86.5K | 3.1M | 1.1M | 24.8M | 9.1M | 2.0 M | 7.9 M |
kha | 37.8K | 12.1K | 235.5K | 75.2K | 15.8M | 6M | 88.6M | 30.2M | 9.8 M | 27.3 M |
ilo | 69.8K | 11.8K | 889.2K | 365.1K | 26.7M | 9M | 187.9M | 59.4M | 20.6 M | 64.0 M |
ctd_Latn | 23.3K | 11.6K | 575.6K | 382.2K | 23.3K | 11.6K | 90.7K | 41K | 21.5 M | 35.1 M |
vec | 1.1M | 11.1K | 10M | 209.7K | 284.7M | 7.8M | 1.8B | 43.8M | 17.7 M | 625.0 M |
hil | 126.8K | 10.6K | 1.1M | 379.7K | 43.9M | 9.2M | 293.5M | 57.2M | 18.5 M | 95.2 M |
tyv | 61.6K | 9.1K | 596.6K | 268.3K | 9.9M | 4.7M | 80.2M | 38.5M | 16.7 M | 36.6 M |
iba | 34K | 7.6K | 326.9K | 126.1K | 37.8M | 4.8M | 251.4M | 30.5M | 10.0 M | 61.3 M |
ru_Latn | 346.3K | 7.5K | 346.3K | 239.1K | 346.3K | 7.5K | 1.5M | 27.7K | 14.9 M | 452.3 M |
kbd | 154.7K | 7.5K | 1.4M | 257.2K | 31.9M | 4.4M | 321.4M | 36.8M | 16.8 M | 209.6 M |
ti | 20.8K | 7.3K | 20.8K | 481.3K | 18.2M | 8.8M | 95.4M | 44.6M | 30.9 M | 63.6 M |
sa | 154.3K | 7.1K | 154.3K | 1.1M | 70M | 9.9M | 512.5M | 88.8M | 44.9 M | 236.6 M |
av | 107.6K | 6.3K | 806.1K | 190.1K | 15.5M | 3.4M | 129M | 30.2M | 12.8 M | 56.0 M |
bo | 6.2K | 6.2K | 1.1M | 1.1M | 3.4M | 3.4M | 88.7M | 88.7M | 40.7 M | 40.7 M |
zza | 370.1K | 6K | 3.3M | 229.2K | 87.7M | 3.9M | 617.3M | 26.3M | 10.0 M | 234.1 M |
ber_Latn | 480.5K | 5.6K | 10.5M | 169.4K | 480.5K | 5.6K | 2.1M | 18.9K | 11.0 M | 945.3 M |
otq | 17.6K | 5.6K | 17.6K | 114.8K | 10.2M | 3.8M | 65M | 23.4M | 7.7 M | 22.8 M |
te_Latn | 236.6K | 5.3K | 4.4M | 269.1K | 236.6K | 5.3K | 1M | 19.3K | 11.4 M | 254.3 M |
bua | 9.8K | 5.3K | 252K | 144.6K | 4.7M | 2.7M | 38M | 21.7M | 10.0 M | 17.9 M |
ts | 34.7K | 5.2K | 34.7K | 248.6K | 39.6M | 6.5M | 377.2M | 38.8M | 12.2 M | 99.5 M |
cfm | 9.1K | 4.9K | 199.6K | 128.6K | 6.2M | 4M | 32.9M | 21.5M | 7.4 M | 11.6 M |
tn | 138.2K | 4.8K | 138.2K | 174.4K | 46M | 5.5M | 302.3M | 29.2M | 9.4 M | 99.0 M |
krc | 359.5K | 4.8K | 2.3M | 153.9K | 50.2M | 2.6M | 369.5M | 20.7M | 9.1 M | 139.9 M |
ak | 19.5K | 4.8K | 341.7K | 210.2K | 12.3M | 4.7M | 74.5M | 24.8M | 9.1 M | 24.7 M |
meo | 790.7K | 4.7K | 16.5M | 39K | 478M | 1.2M | 3B | 7.5M | 3.1 M | 1.2 G |
chm | 81.5K | 4.7K | 929.1K | 179.7K | 17.2M | 2.9M | 132.2M | 21.3M | 9.8 M | 53.5 M |
to | 14.3K | 4.6K | 14.3K | 149K | 10.3M | 5.7M | 58.2M | 29.9M | 9.6 M | 19.0 M |
ee | 14.1K | 4.5K | 353.6K | 246.7K | 9.7M | 6.2M | 67.9M | 32.8M | 11.8 M | 23.3 M |
nso | 376.2K | 4.4K | 376.2K | 188.4K | 419.2M | 5.3M | 2B | 28.2M | 9.1 M | 502.7 M |
ady | 74.9K | 4.2K | 446.8K | 96.9K | 8M | 1.6M | 67.9M | 14.8M | 6.4 M | 30.6 M |
rom | 22.9K | 4.2K | 22.9K | 76.1K | 8.9M | 2.6M | 59M | 15.9M | 5.8 M | 21.0 M |
bho | 13.6K | 4.1K | 306.2K | 118.5K | 7.1M | 2.7M | 37.6M | 13.4M | 7.4 M | 20.6 M |
ltg | 13.1K | 4.1K | 213.7K | 87.3K | 4M | 1.9M | 29.2M | 13.9M | 5.6 M | 11.7 M |
fj | 17K | 4K | 410K | 164.1K | 11.6M | 5.2M | 67.7M | 28M | 8.6 M | 22.5 M |
yua | 10.4K | 4K | 141.6K | 77.6K | 5.2M | 2.5M | 36.8M | 17.2M | 5.7 M | 12.4 M |
gn | 87.1K | 3.9K | 770.9K | 162.6K | 19.2M | 2.7M | 140.7M | 20.8M | 7.8 M | 52.1 M |
az_RU | 6.5K | 3.8K | 231.8K | 177.3K | 6.5K | 3.8K | 24K | 12.9K | 10.3 M | 15.1 M |
ln | 94.7K | 3.3K | 718.7K | 139K | 42.4M | 3.4M | 291.8M | 21.5M | 6.8 M | 85.3 M |
ada | 6.5K | 3.1K | 291.5K | 199.2K | 7.5M | 4.9M | 38.9M | 24.2M | 8.6 M | 13.9 M |
myv | 164.8K | 3.1K | 164.8K | 130K | 16M | 1.7M | 120.3M | 13.8M | 6.2 M | 49.5 M |
bik | 44.8K | 3.1K | 376.7K | 77K | 14.8M | 2.5M | 102.3M | 15.7M | 5.3 M | 34.0 M |
tlh | 516.9K | 3.1K | 516.9K | 46.9K | 221.3M | 1.1M | 1.4B | 7.8M | 2.7 M | 554.2 M |
kbp | 5.9K | 3K | 247.9K | 128.3K | 5.6M | 2.6M | 30.8M | 14.6M | 5.7 M | 12.4 M |
war | 1M | 2.9K | 114M | 96.2K | 612.1M | 2.4M | 3.5B | 16.1M | 3.7 M | 1.2 G |
wa | 70.6K | 2.8K | 1.5M | 127.2K | 35.2M | 3.6M | 198.8M | 20.4M | 7.2 M | 67.8 M |
bew | 311.1K | 2.7K | 10.4M | 58.4K | 212.4M | 1.3M | 1.4B | 8.5M | 3.1 M | 547.1 M |
rcf | 21.6K | 2.6K | 21.6K | 50.5K | 4.9M | 1.2M | 30.2M | 5.7M | 2.1 M | 11.4 M |
ta_Latn | 260.7K | 2.6K | 3.4M | 142.7K | 260.7K | 2.6K | 1.2M | 9.1K | 5.0 M | 215.4 M |
kac | 5.9K | 2.6K | 109.2K | 77.4K | 5M | 2.8M | 26.6M | 13.6M | 4.3 M | 8.0 M |
iu | 5.4K | 2.5K | 92.6K | 53.1K | 1.9M | 907.4K | 17.5M | 8.3M | 4.8 M | 9.9 M |
ay | 8.1K | 2.5K | 196.7K | 83.8K | 3.9M | 1.4M | 34.5M | 13.1M | 4.5 M | 12.7 M |
kum | 4.2K | 2.5K | 132.2K | 89.7K | 2.3M | 1.6M | 18.2M | 12.4M | 5.3 M | 8.0 M |
qu | 149.7K | 2.4K | 1M | 87K | 26.7M | 1.3M | 200.6M | 12.2M | 4.0 M | 68.3 M |
bgp | 355.7K | 2.4K | 5.6M | 43.3K | 186.1M | 1.8M | 1.1B | 9.8M | 3.1 M | 377.5 M |
hif | 702K | 2.4K | 7.9M | 124.7K | 1.2B | 3.2M | 9.1B | 19.1M | 5.9 M | 3.5 G |
kw | 176.9K | 2.3K | 1M | 51.6K | 53.1M | 1.3M | 327.8M | 7.7M | 2.8 M | 89.2 M |
nan_Latn_TW | 7.4K | 2.3K | 7.4K | 72.7K | 7.4K | 2.3K | 28.3K | 7.7K | 4.8 M | 15.4 M |
srn | 16.7K | 2.3K | 16.7K | 139.5K | 8M | 3.4M | 49.1M | 17M | 5.1 M | 15.6 M |
tly_IR | 406.3K | 2.2K | 406.3K | 18.2K | 406.3K | 2.2K | 1.6M | 8.6K | 580.4 K | 283.0 M |
sg | 4.2K | 2.1K | 154K | 117.9K | 4.6M | 3.3M | 22.6M | 15.5M | 4.6 M | 6.8 M |
gom | 4.6K | 2.1K | 178.3K | 108K | 2.7M | 1.4M | 19.8M | 10M | 5.0 M | 10.5 M |
ml_Latn | 260.8K | 2.1K | 3.5M | 77.3K | 260.8K | 2.1K | 1.1M | 7.2K | 3.5 M | 277.7 M |
kj | 112.2K | 2.1K | 881.8K | 22.6K | 46.9M | 877.3K | 339.6M | 6M | 2.1 M | 104.9 M |
ksd | 14.9K | 2K | 533K | 78.6K | 11.5M | 2.1M | 62.4M | 10M | 2.9 M | 20.0 M |
dz | 1.9K | 1.9K | 191.7K | 191.7K | 1.1M | 1.1M | 22.7M | 22.7M | 10.0 M | 10.0 M |
kv | 59.1K | 1.9K | 584.3K | 88.8K | 9.5M | 1.2M | 91.4M | 9M | 4.4 M | 41.0 M |
msi | 686.7K | 1.9K | 686.7K | 22.6K | 414.8M | 440.4K | 2.6B | 2.7M | 1.1 M | 1.0 G |
ve | 3.8K | 1.9K | 97.8K | 79.4K | 3.2M | 2.1M | 19M | 11.7M | 3.8 M | 6.2 M |
zap | 5.5K | 1.8K | 202.3K | 93.5K | 4.2M | 1.8M | 26.4M | 11.4M | 4.0 M | 9.6 M |
zxx_xx_dtynoise | 118.8K | 1.8K | 3.8M | 49.3K | 118.8K | 1.8K | 501K | 6.6K | 3.9 M | 367.0 M |
meu | 5.9K | 1.7K | 232.1K | 72.6K | 4.2M | 1.4M | 27.2M | 8.6M | 2.6 M | 9.1 M |
iso | 3.7K | 1.7K | 155.8K | 111.5K | 4.4M | 2.7M | 23M | 13.7M | 4.9 M | 8.1 M |
ium | 100.3K | 1.7K | 6.2M | 54.9K | 48.4M | 1.7M | 314M | 7.4M | 2.6 M | 124.0 M |
nhe | 3K | 1.7K | 3K | 57.7K | 1.9M | 1.2M | 15.6M | 9.8M | 2.7 M | 4.8 M |
tyz | 8K | 1.7K | 454.8K | 104.6K | 7.5M | 1.9M | 46.3M | 11.3M | 3.8 M | 16.0 M |
hui | 2K | 1.7K | 80.1K | 74.7K | 1.8M | 1.7M | 11.8M | 10.9M | 3.0 M | 3.3 M |
new | 6.6K | 1.6K | 6.6K | 85K | 3.2M | 1.4M | 21.2M | 8.8M | 4.4 M | 10.6 M |
mdf | 71K | 1.6K | 394.7K | 45.1K | 8.3M | 670.1K | 65.8M | 5.5M | 2.5 M | 26.7 M |
pag | 49.6K | 1.6K | 49.6K | 88.8K | 13.8M | 1.9M | 92.9M | 12M | 3.9 M | 29.2 M |
gv | 501.9K | 1.6K | 18.8M | 26.9K | 137.7M | 996.2K | 933.1M | 6.2M | 2.0 M | 318.6 M |
gag | 33.9K | 1.6K | 491K | 37K | 10.2M | 661K | 84.9M | 5.2M | 2.1 M | 32.6 M |
ngu | 3.8K | 1.5K | 3.8K | 87.1K | 2.7M | 1.5M | 21.4M | 11.8M | 3.6 M | 6.7 M |
quc | 4.4K | 1.5K | 89.2K | 41.2K | 2.8M | 1.1M | 16.6M | 6.4M | 2.2 M | 5.9 M |
mam | 23K | 1.5K | 446.3K | 52.9K | 9.8M | 1.2M | 70.4M | 7.2M | 2.6 M | 30.7 M |
min | 28.2K | 1.5K | 500.9K | 75.6K | 10.2M | 1.4M | 70.5M | 9.9M | 2.6 M | 21.1 M |
ho | 2K | 1.5K | 57K | 47.8K | 1.8M | 1.3M | 12.3M | 7.8M | 1.9 M | 3.1 M |
pon | 5.7K | 1.5K | 167.8K | 48.7K | 3M | 1.1M | 18.3M | 6.7M | 2.1 M | 6.1 M |
mrj | 97.1K | 1.4K | 97.1K | 60.3K | 14.5M | 1.1M | 100.6M | 7.6M | 3.6 M | 40.8 M |
lu | 10.6K | 1.4K | 316K | 112.1K | 7.8M | 2.3M | 54.2M | 15.4M | 4.8 M | 18.0 M |
gom_Latn | 231.1K | 1.4K | 4.1M | 77.9K | 231.1K | 1.4K | 1M | 5.1K | 3.6 M | 240.6 M |
alt | 2.6K | 1.4K | 110.1K | 65.9K | 1.8M | 1.1M | 14.3M | 8.7M | 3.8 M | 6.4 M |
nzi | 2.5K | 1.4K | 2.5K | 71.8K | 2.5M | 1.7M | 14.4M | 9.4M | 3.1 M | 4.8 M |
tzo | 2.8K | 1.4K | 100.4K | 75.7K | 2.5M | 1.7M | 15.9M | 10.6M | 3.2 M | 4.9 M |
bci | 7.4K | 1.3K | 124.8K | 87.1K | 5M | 1.9M | 32.8M | 9M | 3.1 M | 9.4 M |
dtp | 4.6K | 1.3K | 51.2K | 7.9K | 1.9M | 419.4K | 12.7M | 3M | 1013.9 K | 4.5 M |
abt | 1.6K | 1.3K | 122.7K | 110.3K | 1.5M | 1.3M | 9.6M | 8.2M | 2.2 M | 2.7 M |
bbc | 72.3K | 1.3K | 718.3K | 73.2K | 21.7M | 1.7M | 151.3M | 10.6M | 3.6 M | 47.9 M |
pck | 8.9K | 1.3K | 8.9K | 69.7K | 6.8M | 2.1M | 39.8M | 11.5M | 4.2 M | 14.2 M |
mai | 54.3K | 1.2K | 1M | 60.2K | 24.6M | 1.2M | 156M | 6.8M | 3.6 M | 67.1 M |
mps | 2.7K | 1.2K | 132.8K | 71.9K | 2.8M | 1.6M | 16M | 8.7M | 2.3 M | 4.8 M |
emp | 3.6K | 1.2K | 106.4K | 75.4K | 1.9M | 999.1K | 14.5M | 7.4M | 2.4 M | 4.9 M |
mgh | 5.5K | 1.2K | 151.8K | 61.2K | 2.8M | 1.1M | 24.1M | 8.2M | 2.8 M | 8.3 M |
tab | 7.8K | 1.2K | 226.4K | 26.8K | 4.3M | 538.9K | 33.7M | 4.4M | 1.9 M | 15.7 M |
crh | 5.1K | 1.2K | 170.9K | 61.8K | 2.4M | 943K | 18.8M | 7.5M | 3.4 M | 8.9 M |
tbz | 5.1K | 1.1K | 128.7K | 37.5K | 3.5M | 893.4K | 22M | 4.8M | 1.9 M | 10.2 M |
ss | 8.1K | 1.1K | 8.1K | 30.4K | 2.7M | 568.3K | 23.7M | 5.5M | 1.8 M | 7.4 M |
chk | 2.8K | 1.1K | 98.8K | 44K | 2M | 1M | 12M | 5.8M | 1.8 M | 4.0 M |
bru | 3K | 1.1K | 89.7K | 48.2K | 2.4M | 938.1K | 12.9M | 4.8M | 1.5 M | 4.5 M |
nnb | 4.9K | 1.1K | 4.9K | 70.2K | 3.2M | 1.2M | 27.7M | 9.1M | 3.3 M | 10.0 M |
fon | 5.3K | 1.1K | 222.9K | 67.3K | 6.9M | 1.8M | 34M | 8.3M | 3.1 M | 14.8 M |
ppk | 2.6K | 1.1K | 85.8K | 34.9K | 1.9M | 801.8K | 13.2M | 5.5M | 1.6 M | 4.3 M |
tiv | 3.8K | 1.1K | 3.8K | 80.7K | 3.7M | 2.1M | 20.4M | 10.2M | 3.2 M | 6.0 M |
btx | 3.1K | 1K | 81.7K | 43.9K | 2M | 907.5K | 13.1M | 5.9M | 2.0 M | 4.6 M |
bg_Latn | 200.4K | 991 | 2.8M | 25.5K | 200.4K | 991 | 927.1K | 3.7K | 1.7 M | 143.6 M |
mbt | 1.6K | 969 | 86K | 45.4K | 2.4M | 1.3M | 14.6M | 7.5M | 2.2 M | 5.1 M |
ace | 65.5K | 966 | 632.5K | 32.5K | 19.9M | 1.1M | 146.1M | 7.4M | 2.2 M | 42.3 M |
tvl | 2.3K | 933 | 72.9K | 53.6K | 2.5M | 1.7M | 12.6M | 8.1M | 2.4 M | 3.8 M |
dov | 3.5K | 923 | 129.8K | 56.7K | 2.6M | 967.5K | 20.7M | 8M | 2.6 M | 7.1 M |
ach | 2K | 915 | 63K | 40.1K | 1.6M | 890.9K | 9M | 4.7M | 1.6 M | 3.0 M |
xal | 71.8K | 913 | 498.5K | 30.8K | 8.5M | 449.8K | 64.7M | 3.2M | 1.5 M | 24.4 M |
cuk | 4.1K | 899 | 76.5K | 34.3K | 2M | 469.9K | 24.7M | 4.6M | 1.5 M | 6.1 M |
kos | 2.2K | 881 | 44.6K | 27.8K | 1.1M | 780.1K | 6.5M | 4.2M | 1.4 M | 2.2 M |
crs | 7.6K | 873 | 282.4K | 40.1K | 7.3M | 1.2M | 40.1M | 6.8M | 2.2 M | 13.2 M |
wo | 36.4K | 871 | 303.4K | 25.4K | 30.7M | 850.7K | 213.4M | 4.5M | 1.7 M | 59.9 M |
bts | 3.2K | 869 | 109.1K | 29.1K | 3.1M | 663.3K | 20.8M | 4.2M | 1.4 M | 6.2 M |
ubu | 2.2K | 846 | 113.5K | 47.5K | 2.3M | 996.4K | 15.9M | 6.7M | 1.9 M | 4.7 M |
gym | 1.5K | 820 | 73.7K | 49.6K | 1.6M | 1.1M | 10.3M | 6.9M | 2.0 M | 3.2 M |
ibb | 74.1K | 818 | 516.5K | 36.3K | 26.4M | 776.1K | 190.9M | 4.9M | 1.5 M | 56.0 M |
ape | 7K | 814 | 147K | 56.1K | 12.4M | 881.5K | 71M | 5.8M | 1.6 M | 18.8 M |
stq | 111.9K | 809 | 111.9K | 27.7K | 34.4M | 600.4K | 243.1M | 3.8M | 1.5 M | 82.5 M |
ang | 66.5K | 803 | 1.8M | 86.7K | 28.5M | 1.7M | 193M | 9.8M | 3.4 M | 67.1 M |
enq | 7.1K | 793 | 241.9K | 39.1K | 11M | 718.8K | 68.5M | 4.8M | 1.3 M | 18.8 M |
tsg | 353.8K | 789 | 353.8K | 17.9K | 158M | 588.9K | 1.1B | 3.8M | 1.0 M | 309.9 M |
shn | 889 | 788 | 46.4K | 46.2K | 383.8K | 378.5K | 5.7M | 5.7M | 2.6 M | 2.6 M |
kri | 39.1K | 786 | 271.2K | 38.8K | 12.6M | 995.2K | 86.4M | 5M | 1.6 M | 20.9 M |
kek | 3.2K | 782 | 70.4K | 38.4K | 1.8M | 709K | 13.6M | 4.4M | 1.4 M | 4.7 M |
rmc | 2.4K | 738 | 2.4K | 25.8K | 1.3M | 545.4K | 7.9M | 3.2M | 1.1 M | 2.9 M |
acf | 4.9K | 730 | 81.9K | 24.6K | 2.1M | 602.2K | 11.6M | 3M | 1.1 M | 4.7 M |
fip | 3.7K | 729 | 165.6K | 49K | 3.5M | 916.8K | 25.7M | 6.6M | 2.1 M | 8.6 M |
syr | 3.5K | 716 | 326.4K | 197.1K | 4.6M | 1.9M | 31.5M | 14M | 6.1 M | 13.9 M |
qub | 972 | 705 | 61K | 51.1K | 589.2K | 455.5K | 5.9M | 4.4M | 1.4 M | 1.8 M |
bm | 21.9K | 702 | 172.3K | 24.5K | 7.1M | 583.1K | 48.4M | 3M | 1.1 M | 14.4 M |
tzh | 1.7K | 702 | 41.7K | 33.9K | 1.5M | 929.6K | 9.3M | 5.6M | 1.6 M | 2.6 M |
jiv | 1.7K | 696 | 80.9K | 32K | 1.1M | 418.9K | 9.6M | 3.5M | 1.1 M | 3.3 M |
kn_Latn | 72.9K | 688 | 765.9K | 10.1K | 72.9K | 688 | 328.1K | 2.5K | 430.8 K | 61.4 M |
kjh | 1.5K | 672 | 42.8K | 28.7K | 566.1K | 379.2K | 4.5M | 3.1M | 1.3 M | 2.0 M |
yap | 1.9K | 638 | 37.6K | 19.5K | 1.3M | 661.4K | 6.9M | 3.3M | 1.0 M | 2.2 M |
ban | 8K | 637 | 150.9K | 16.3K | 5M | 499.7K | 35.4M | 3.6M | 1.1 M | 12.0 M |
tuc | 3.5K | 635 | 193.2K | 50.3K | 2.9M | 703K | 17.2M | 4.1M | 1.2 M | 5.7 M |
tcy | 10.7K | 632 | 338.7K | 37.1K | 5.5M | 432.6K | 41.6M | 3.3M | 1.7 M | 20.9 M |
cab | 1.2K | 629 | 50.4K | 37.5K | 1M | 690.9K | 7.5M | 5.1M | 1.6 M | 2.4 M |
cak | 1.2K | 617 | 70.4K | 32.6K | 1.3M | 730.1K | 7.6M | 4.2M | 1.3 M | 2.4 M |
din | 128.4K | 611 | 885.8K | 23.6K | 31.6M | 541.7K | 210M | 2.9M | 1.1 M | 64.3 M |
zh_Latn | 739.4K | 602 | 10.7M | 45.1K | 739.4K | 602 | 3.4M | 2.3K | 2.0 M | 969.9 M |
arn | 2.4K | 593 | 64.5K | 26.2K | 1.5M | 541.9K | 10.2M | 3.7M | 1.2 M | 3.7 M |
lrc | 42.4K | 587 | 351.9K | 9K | 17.3M | 248.9K | 85.3M | 1.4M | 646.9 K | 37.5 M |
rwo | 938 | 572 | 938 | 45.5K | 734.8K | 590.4K | 5.1M | 4.2M | 1.1 M | 1.4 M |
hus | 825 | 569 | 26.5K | 23.7K | 733.4K | 542.1K | 4.4M | 3.1M | 967.6 K | 1.3 M |
bum | 4.7K | 559 | 103.8K | 36.5K | 3M | 805.5K | 18.8M | 4M | 1.3 M | 6.1 M |
mak | 1K | 555 | 32.5K | 20.4K | 761K | 457.4K | 6.1M | 3.7M | 1.1 M | 2.0 M |
frp | 148K | 550 | 3.5M | 8.2K | 71.2M | 230.2K | 535.4M | 1.4M | 518.3 K | 129.7 M |
seh | 5.6K | 545 | 68.8K | 37.2K | 2M | 650.6K | 14.9M | 4.9M | 1.5 M | 4.4 M |
twu | 2.5K | 539 | 109.9K | 24.4K | 2.4M | 571.2K | 14.2M | 3.2M | 1.0 M | 4.8 M |
kmb | 1.3K | 538 | 60.4K | 36.9K | 1.4M | 810.8K | 8.4M | 4.6M | 1.4 M | 2.6 M |
ksw | 560 | 536 | 16.1K | 16K | 219.9K | 218.8K | 2.9M | 2.9M | 1.4 M | 1.4 M |
sja | 1.3K | 527 | 67.7K | 24.9K | 982.5K | 459.3K | 7.7M | 3.4M | 1.1 M | 2.6 M |
amu | 1.8K | 511 | 72K | 25.2K | 1.5M | 443.3K | 9.6M | 3.2M | 1.0 M | 3.4 M |
mad | 103.8K | 509 | 500.6K | 18.5K | 16.2M | 386.7K | 111.8M | 2.8M | 960.3 K | 34.2 M |
quh | 1K | 501 | 42K | 29.9K | 624.4K | 396.8K | 5.8M | 3.7M | 1.2 M | 1.8 M |
dyu | 1.2K | 483 | 55.8K | 19.7K | 1.2M | 421.8K | 5.7M | 2M | 665.5 K | 1.9 M |
toj | 736 | 452 | 736 | 26.1K | 691.2K | 540.2K | 4.3M | 3.3M | 1.0 M | 1.3 M |
ch | 12.9K | 449 | 147.5K | 16K | 8.9M | 393.9K | 63.5M | 2.5M | 906.8 K | 10.0 M |
sus | 664 | 437 | 664 | 15.2K | 648K | 402.8K | 3.7M | 2.1M | 674.0 K | 1.0 M |
nog | 970 | 419 | 970 | 11K | 330.3K | 200.4K | 2.6M | 1.6M | 714.0 K | 1.2 M |
jam | 12.7K | 416 | 68.5K | 15.8K | 3.5M | 378.4K | 25.8M | 1.7M | 609.5 K | 7.6 M |
gui | 1.1K | 409 | 62.7K | 24.8K | 915K | 314K | 6.5M | 2M | 619.3 K | 2.1 M |
nia | 2K | 408 | 2K | 25K | 1.7M | 476.5K | 11.3M | 3.1M | 1.0 M | 3.9 M |
mas | 15.2K | 405 | 216.8K | 17.6K | 6.2M | 390.1K | 42.1M | 3M | 927.5 K | 13.4 M |
bzj | 983 | 404 | 33.6K | 26.4K | 824.3K | 565K | 4.5M | 2.9M | 981.2 K | 1.4 M |
mkn | 956 | 402 | 33.1K | 25.4K | 584.2K | 456.9K | 3.4M | 2.6M | 734.8 K | 1.0 M |
lhu | 46K | 377 | 975K | 15.7K | 29.1M | 441.2K | 208.6M | 2.5M | 623.0 K | 38.8 M |
ctu | 690 | 366 | 35.5K | 20.6K | 646.7K | 352.8K | 3.6M | 2M | 614.9 K | 1.2 M |
kg | 4.7K | 365 | 85.5K | 21.7K | 2.5M | 406.7K | 16.6M | 2.6M | 905.4 K | 5.7 M |
inb | 387 | 343 | 17.3K | 17K | 202.8K | 197K | 2M | 1.9M | 535.2 K | 555.6 K |
guh | 1.9K | 331 | 104.9K | 28.4K | 1.5M | 328.4K | 11.2M | 3M | 789.5 K | 3.5 M |
rn | 8.2K | 323 | 8.2K | 11.1K | 4.5M | 179K | 33.2M | 1.3M | 449.9 K | 11.8 M |
bus | 467 | 322 | 21.4K | 12.1K | 418.4K | 219.2K | 2.1M | 1.1M | 428.8 K | 830.9 K |
mfe | 7.5K | 320 | 198.8K | 18.2K | 4.6M | 374.8K | 26.9M | 2.1M | 716.4 K | 10.1 M |
sda | 1.6K | 317 | 43.2K | 6.2K | 2.5M | 218.3K | 15.8M | 1.6M | 529.0 K | 4.7 M |
bi | 71.9K | 311 | 308.5K | 13.6K | 19.4M | 359.4K | 132.4M | 1.9M | 546.9 K | 42.6 M |
cr_Latn | 19K | 303 | 170K | 8.9K | 19K | 303 | 81.8K | 1K | 590.4 K | 15.0 M |
gor | 1.7K | 303 | 53.3K | 6.5K | 1.4M | 227.1K | 9.4M | 1.7M | 494.0 K | 3.1 M |
jac | 8.2K | 303 | 61.6K | 11.9K | 1.8M | 271K | 15.7M | 1.7M | 530.3 K | 7.3 M |
chr | 964 | 301 | 33.8K | 7.5K | 629.9K | 172.3K | 4.7M | 1M | 564.1 K | 2.1 M |
mh | 4.6K | 296 | 235.1K | 13K | 3.6M | 393.5K | 24.9M | 2.2M | 778.4 K | 8.4 M |
mni | 1.2K | 290 | 38.1K | 13.2K | 841.3K | 245.5K | 6.4M | 1.8M | 866.6 K | 3.0 M |
wal | 2.6K | 286 | 128K | 14K | 2M | 203.4K | 17M | 1.7M | 525.7 K | 5.1 M |
teo | 2.8K | 274 | 131.5K | 13.7K | 2.3M | 221.4K | 15.3M | 1.6M | 564.9 K | 5.3 M |
gub | 31.7K | 271 | 160.4K | 25K | 4.7M | 286.2K | 44.7M | 1.6M | 431.3 K | 23.1 M |
qvi | 1.2K | 266 | 48.4K | 19.3K | 720.4K | 248.9K | 6.5M | 2.3M | 641.2 K | 1.9 M |
tdx | 1.7K | 262 | 26.3K | 13.2K | 1M | 238.5K | 7M | 1.6M | 503.6 K | 2.1 M |
rki | 331 | 251 | 331 | 7.8K | 119.7K | 113.7K | 1.6M | 1.5M | 751.3 K | 781.8 K |
djk | 560 | 246 | 30.9K | 24.4K | 669.5K | 455.6K | 3.7M | 2.2M | 644.3 K | 1.0 M |
nr | 10.7K | 246 | 10.7K | 11.3K | 5.3M | 162.5K | 49M | 1.5M | 519.7 K | 17.8 M |
zne | 1.3K | 239 | 61.9K | 21.3K | 1.4M | 504.6K | 8.2M | 2.8M | 882.3 K | 2.8 M |
izz | 423 | 237 | 21.7K | 14.5K | 382.8K | 194.5K | 2.1M | 1.1M | 382.2 K | 789.9 K |
noa | 902 | 234 | 902 | 11.5K | 821.1K | 243.9K | 5.2M | 1.6M | 534.3 K | 1.7 M |
bqc | 275 | 228 | 9.8K | 8.2K | 193K | 151.7K | 997K | 788.4K | 317.0 K | 408.1 K |
srm | 847 | 227 | 847 | 17.3K | 1.2M | 445.3K | 6.3M | 2M | 613.4 K | 1.7 M |
niq | 26.7K | 226 | 26.7K | 4.2K | 9.9M | 103.4K | 72.1M | 716.2K | 239.1 K | 20.9 M |
bas | 4.2K | 216 | 105.2K | 14.9K | 4.3M | 362.8K | 25.7M | 1.7M | 600.7 K | 7.6 M |
dwr | 452 | 215 | 22.1K | 11.1K | 269.4K | 139.5K | 2.2M | 1.2M | 375.4 K | 747.6 K |
guc | 537 | 214 | 22.9K | 12.5K | 422.4K | 218.1K | 3.4M | 1.8M | 540.1 K | 1.1 M |
jvn | 1K | 213 | 36.2K | 7.8K | 790.5K | 185.6K | 5.3M | 1.2M | 357.2 K | 1.7 M |
hvn | 737 | 200 | 33.9K | 7K | 779.7K | 239.4K | 4.3M | 1.2M | 378.5 K | 1.4 M |
sxn | 587 | 197 | 587 | 9.9K | 494K | 220.6K | 3.4M | 1.5M | 507.1 K | 1.2 M |
koi | 20.7K | 196 | 153.9K | 5K | 2.2M | 89.9K | 17.1M | 664.5K | 323.0 K | 7.1 M |
alz | 2.2K | 195 | 59.3K | 12.2K | 1.3M | 246.9K | 7.9M | 1.4M | 488.1 K | 2.9 M |
nyu | 1.2K | 195 | 1.2K | 11K | 988.7K | 210.5K | 7.7M | 1.6M | 492.6 K | 2.2 M |
bn_Latn | 98.7K | 191 | 1.3M | 12K | 98.7K | 191 | 458K | 730 | 314.7 K | 81.0 M |
suz | 226 | 186 | 226 | 11.3K | 169.6K | 140.5K | 1M | 855.2K | 339.5 K | 429.6 K |
pau | 1.7K | 185 | 1.7K | 13.1K | 2M | 394.6K | 12.4M | 2M | 600.1 K | 3.2 M |
nij | 1K | 183 | 1K | 9.2K | 741.6K | 186.1K | 4.7M | 1.2M | 389.6 K | 1.6 M |
sat_Latn | 39K | 183 | 39K | 5.5K | 39K | 183 | 183.8K | 601 | 276.1 K | 39.2 M |
gu_Latn | 58.2K | 179 | 688.4K | 5.4K | 58.2K | 179 | 260.8K | 673 | 241.0 K | 47.9 M |
msm | 520 | 177 | 520 | 8.6K | 410.8K | 190.5K | 2.5M | 1.1M | 339.7 K | 789.8 K |
maz | 585 | 170 | 21.3K | 8.2K | 452.9K | 174K | 2.9M | 951.7K | 304.7 K | 971.4 K |
qxr | 2.6K | 153 | 40.8K | 6.4K | 761.5K | 75.4K | 6.6M | 724K | 186.4 K | 1.9 M |
shp | 874 | 150 | 22.4K | 3.7K | 534.1K | 96.8K | 3.8M | 710.4K | 216.9 K | 1.2 M |
hne | 3K | 146 | 118.4K | 4.3K | 2.3M | 139.3K | 12M | 697K | 379.3 K | 6.5 M |
ktu | 3.3K | 144 | 115.5K | 7.8K | 3.2M | 196.9K | 18.5M | 1.1M | 300.1 K | 5.4 M |
laj | 6.5K | 144 | 61K | 6.4K | 2.4M | 140.1K | 15.8M | 730.5K | 233.5 K | 4.6 M |
pis | 1.1K | 139 | 62K | 7.2K | 1.3M | 136.8K | 7.7M | 764K | 212.7 K | 2.2 M |
mag | 631 | 138 | 62.6K | 22.1K | 2.1M | 544.2K | 10.7M | 2.6M | 1.4 M | 5.4 M |
gbm | 2.5K | 137 | 50.8K | 3.8K | 1.7M | 99.7K | 9.1M | 499.6K | 282.4 K | 4.5 M |
tzj | 471 | 136 | 11.1K | 7.3K | 299.9K | 150.8K | 1.9M | 884.2K | 272.0 K | 663.9 K |
oj | 2.5K | 135 | 2.5K | 1.6K | 1.2M | 35.9K | 9.6M | 337.1K | 117.6 K | 3.4 M |
ndc_ZW | 2.2K | 132 | 2.2K | 8.7K | 2.2K | 132 | 9.1K | 523 | 343.1 K | 2.2 M |
tks | 63.7K | 127 | 63.7K | 6.8K | 17.1M | 41.5K | 88.9M | 260.8K | 39.5 K | 33.0 M |
awa | 5.8K | 126 | 100.1K | 8.4K | 2.2M | 98.7K | 11.1M | 475K | 226.6 K | 5.8 M |
gvl | 37.9K | 126 | 213K | 6.9K | 21.1M | 161.1K | 141M | 789.2K | 257.8 K | 31.7 M |
knj | 229 | 126 | 10.1K | 9.2K | 202.6K | 171.8K | 1.1M | 855K | 253.1 K | 345.4 K |
spp | 733 | 123 | 733 | 5.8K | 902.7K | 141.8K | 4.4M | 682.5K | 217.8 K | 1.4 M |
mqy | 69.3K | 119 | 309K | 2.5K | 12.1M | 88.6K | 78.9M | 506.5K | 170.4 K | 16.3 M |
tca | 410 | 117 | 20K | 7.3K | 283K | 121.5K | 2.3M | 786K | 226.2 K | 781.2 K |
cce | 847 | 116 | 23.2K | 11K | 539.3K | 227.2K | 3.3M | 1.3M | 393.8 K | 1.1 M |
skr | 3.8K | 107 | 279.3K | 17.1K | 6.2M | 324K | 32.2M | 1.7M | 768.5 K | 15.4 M |
kmz_Latn | 24K | 106 | 361K | 2.4K | 24K | 106 | 108.6K | 401 | 231.8 K | 16.7 M |
dje | 913 | 100 | 40.2K | 3.7K | 816.3K | 97.5K | 4.7M | 480.7K | 161.2 K | 1.5 M |
gof | 2.8K | 97 | 33.8K | 5.5K | 703K | 68.8K | 5.5M | 506K | 159.1 K | 1.7 M |
agr | 465 | 93 | 16.1K | 3.6K | 295.4K | 67.2K | 2.3M | 554.5K | 177.0 K | 760.1 K |
qvz | 534 | 88 | 6.8K | 3.5K | 145.5K | 50.5K | 1.2M | 438.3K | 124.2 K | 382.7 K |
adh | 2.6K | 87 | 107.2K | 1K | 2.4M | 42.1K | 14.5M | 254.9K | 84.6 K | 5.0 M |
quf | 522 | 86 | 8.4K | 5.2K | 155.7K | 61.8K | 1.5M | 609K | 173.7 K | 542.8 K |
kjg | 113 | 84 | 3K | 2.9K | 67.6K | 67K | 408.5K | 399K | 159.2 K | 167.7 K |
tsc | 12.6K | 82 | 12.6K | 4K | 3.5M | 93.1K | 23.4M | 521.3K | 161.9 K | 7.0 M |
ber | 2.7K | 79 | 12.6K | 1.2K | 1.1M | 46.4K | 6.4M | 265.9K | 141.5 K | 3.0 M |
ify | 611 | 79 | 19.8K | 2.8K | 422.7K | 56.2K | 2.6M | 334K | 109.5 K | 913.1 K |
cbk | 10.1K | 78 | 43.8K | 2K | 1.7M | 64.3K | 10.3M | 339.3K | 93.4 K | 3.4 M |
quy | 588 | 78 | 28.1K | 2.7K | 423.3K | 37.3K | 4.5M | 368.2K | 114.5 K | 1.2 M |
ahk | 244 | 77 | 6.2K | 4.1K | 264K | 124.8K | 1.3M | 715.5K | 182.8 K | 359.7 K |
cac | 212 | 77 | 3.4K | 1.8K | 125.7K | 54.1K | 978.7K | 319.8K | 95.8 K | 280.3 K |
akb | 1K | 71 | 21.3K | 408 | 870.9K | 54.5K | 5.2M | 337.8K | 93.7 K | 1.6 M |
nut | 29K | 67 | 29K | 1.5K | 4.8M | 39.8K | 23.5M | 184.1K | 36.4 K | 8.3 M |
ffm | 1.8K | 65 | 30.1K | 2K | 745.6K | 39.1K | 4.6M | 236.1K | 83.8 K | 1.8 M |
taj | 146 | 65 | 21.6K | 14.3K | 309.7K | 203K | 2.3M | 1.4M | 503.0 K | 872.7 K |
ms_Arab | 698 | 63 | 698 | 320 | 698 | 63 | 2.9K | 239 | 64.7 K | 1016.0 K |
brx | 322 | 62 | 5.3K | 2.4K | 144.2K | 41K | 1.1M | 304.4K | 146.6 K | 515.7 K |
ann | 464 | 56 | 5K | 1.6K | 116.4K | 35.9K | 760.9K | 215.1K | 74.9 K | 295.2 K |
qup | 169 | 53 | 4.3K | 2.5K | 77.5K | 31.3K | 763.8K | 297.8K | 74.7 K | 207.3 K |
ms_Arab_BN | 2.6K | 46 | 2.6K | 374 | 2.6K | 46 | 10.5K | 171 | 50.0 K | 5.1 M |
miq | 236 | 45 | 6.4K | 3.5K | 183.7K | 80.2K | 1.2M | 485.6K | 157.6 K | 384.1 K |
msb | 811 | 41 | 811 | 1K | 705.9K | 28.8K | 4.4M | 167.5K | 53.3 K | 1.7 M |
bim | 410 | 40 | 31.1K | 6.3K | 669.8K | 167.4K | 3.2M | 793.4K | 252.7 K | 1.1 M |
raj | 1.8K | 40 | 1.8K | 5.7K | 1.3M | 81.1K | 7.1M | 405K | 226.2 K | 3.9 M |
kwi | 382 | 37 | 16.9K | 2.2K | 253.8K | 23.4K | 1.8M | 172.8K | 47.6 K | 536.2 K |
tll | 200 | 37 | 200 | 2.7K | 304.2K | 62.2K | 2.2M | 409.8K | 132.3 K | 664.5 K |
trp | 12.8K | 36 | 12.8K | 1.7K | 4.1M | 39K | 29.9M | 257.3K | 87.5 K | 10.2 M |
smt | 1.4K | 34 | 1.4K | 703 | 1M | 36.5K | 6.8M | 245.4K | 87.9 K | 2.5 M |
mrw | 11.3K | 29 | 11.3K | 1K | 4.2M | 45.7K | 27.8M | 257.2K | 81.3 K | 8.8 M |
dln | 236 | 28 | 5.2K | 969 | 150.8K | 21.5K | 860.5K | 118.3K | 36.8 K | 280.3 K |
qvc | 3.4K | 27 | 14.6K | 2.2K | 495.7K | 25.7K | 5M | 233.7K | 65.3 K | 2.6 M |
doi | 1.7K | 26 | 21.8K | 975 | 568.7K | 25.5K | 3.2M | 135.3K | 66.7 K | 1.6 M |
ff | 13.6K | 26 | 150K | 5K | 3.4M | 46.5K | 22.8M | 277.6K | 78.8 K | 8.5 M |
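If you want to sanity-check any of the counts above, a rough recomputation for a single language split could look like the sketch below. This assumes each row exposes a `text` field; adjust if the schema differs.

```python
from datasets import load_dataset

# Sketch: recompute document and whitespace-token counts for one language.
abt_clean = load_dataset("allenai/madlad-400", "abt", split="clean")

n_docs = len(abt_clean)
n_toks = sum(len(row["text"].split()) for row in abt_clean)
print(f"abt (clean): {n_docs:,} docs, {n_toks:,} whitespace tokens")
```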
## Citation Information
~~~
@misc{kudugunta2023madlad400,
title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset},
author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat},
year={2023},
eprint={2309.04662},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
~~~
"### Overall notes from the audit\n\n\nThe decision was to include languages that looked noisy, but omit any language\nthat was clearly majority noise, or only had 20 or fewer docs. This is a low\nbar -- twenty documents can be very little indeed, and some of the corpora released are quite noisy, but all of them should have at least the potential to\nbe used in some useful way. The motivation for not releasing nonsense or tiny\ndatasets is to not give a false sense of how multilingual this dataset actually\nis (\"Representation washing\"), as recommended by Quality at a Glance.\n\n\nA few overarching points:\n\n\n* Many low-resource languages only had Bible text, or in some cases URL\ndata. These are marked in the rows below. Generally 'ok bible' means that\n100% of the audited sentences were Biblical, whereas if 'bible' is simply\nmentioned in the note, it was not the only source of data.\n* Indian languages in the Latin script had a high concentration of\npornographic content.",
"### Renames and Merges as a result of the Audit\n\n\nIn several cases, it was clear from the audit that the corpora were not in the\nlanguages that the LangID model claimed they were. This led to the following\nrenames:\n\n\n* dty renamed to 'zxx-xx-dtynoise', aka a \"language\" of noise. This is mainly\nmis-rendered PDFs and may have some practical applications for decoding\nsaid.\n* 'fan' renamed to 'bum'\n* 'ss-SZ' renamed to 'ss' -- this was just a result of us having inconsistent\ndata labels.\n* 'cjk' merged into the 'gil' dataset\n* 'bjj' merged into the 'awa' dataset\n\n\nCanaries\n--------\n\n\nCanaries are provided in separate 'canaries' folder. Canaries are organized into three directions: 'monolingual' hosts canaries designed for the MADLAD-400 monody data, 'multiway' for the multiway data, and 'generic' the generic canaries generated only from the model's vocabulary.\n\n\n* Monolingual: Canaries here are organized by the language the canary was generated from. This corresponds exactly to the 'translate\\_copy' setting in the paper, where the source and target language match.\n* Multiway: Canaries here are organized in one of two fashions. 'to\\_XX' indicates canaries organized by the target language (and where the source language could be any language). 'XX-XX' indicates the canaries (interleaved\\_both and interleaved\\_mislabeled\\_both) designed for a specific pair of languages.\n\n\nWithin each subdirectory above, canaries are into separate files named by the canary type. There is always only a single file for each canary type. The 'generic' folder contains within it the four canary types.\n\n\nCanaries can be mixed in with normal training data to then be analyzed post-hoc to training\n\n\nReferences\n----------\n\n\nRaffel, Colin, et al. \"Exploring the limits of transfer learning with a unified\ntext-to-text transformer.\" J. Mach. Learn. Res. 21.140 (2020): 1-67.\n\n\nContact\n-------\n\n\nPlease reach out to {snehakudugunta, icaswell}꩜URL. For questions about the canaries, reach out to cchoquette@URL\n\n\nLicense\n-------\n\n\nThis data is released with the 'CC-BY-4.0' license.\n\n\nDetailed notes from the audit\n-----------------------------\n\n\nHere are the notes on all languages, along with the number of documents\nfound, and the final decision made with respect to including the language in\nthis dataset.\n\n\n\nA few comments too long to fit in the table above:\n\n\n* 'alt': WAIT THIS IS AMAZING IT IS ACTUALLY ALTAI! e.g. from urls like\nURL\n* 'tly-IR': They all look like boilerplate content, e.g., list of\nkeywords/search queries used to bump page ranking in search results. Not any\nuseful material for translation. Remove.\n* 'zap': pls note that at least some Zapotec speakers tend to view it as one\nlanguage, not as a million dialects like ISO does. However, some are\ncertainly mutually unintelligible, complicating the matter.\n* 'zh-Latn': The biggest problem is that several examples are not in Latin\nChinese (i.e., romanization in my understanding) but in English or mixed\nEnglish and Chinese. For those data in Latin Chinese, their quality seems to\nbe good.\n* 'zh': Many examples are porn-related, particularly those very long\ndocuments. Also, there are some examples of traditional Chinese.\n\n\nFinal Dataset information\n-------------------------\n\n\nThe number of documents, sentences, tokens, characters, and bytes for the noisy\nand clean splits of the data. 
Note that the \"toks\" field below uses whitespace\nfor tokenization, so is not appropriate for non-whitespace-separating languages\nlike Chinese (see section above). Note that the english subset in this version\nis missing 18% of documents that were included in the published analysis of the dataset.\nThese documents will be incoporated in an update coming soon.\n\n\n\n\n```\n@misc{kudugunta2023madlad400,\n title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset}, \n author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat},\n year={2023},\n eprint={2309.04662},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n\n```"
]
| [
59,
183,
190,
186,
574,
230,
1088
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-n>1T #license-odc-by #arxiv-2309.04662 #arxiv-2010.14571 #arxiv-2103.12028 #region-us \n### Cursed Substrings\n\n\nBased on the initial round of data audits, the authors created a heuristic list of\nsubstrings and regexes accounting for a large amount of questionable content.\nKeep in mind that these all are fed into the 'pct\\_questionable' score -- a\nsentence is only excluded from the 'clean' dataset if over 20% of the sentences\nin that document are flagged as questionable.\n\n\nnotes about cursed substrings:\n\n\n* low quality sentences ending in the pipe character were very common. Before\nyou ask, this was not Devanagari-script text using a Danda.\n* The last few regexes are meant to match 'A N T S P E A K', 'List Case', and\nweirdly regular text (for instance, lists of shipping labels or country\ncodes)### Virama Correction\n\n\nMany languages using Brahmic Abugida (South and Southeast Asian scripts like\nDevanagari, Khmer, etc.) use some variant on the virama character. For whatever\nreason, it was found that this character was often messed up in the common crawl\nsnapshots used. Therefore, for the languages 'bn my pa gu or ta te kn ml\nsi th tl mn lo bo km hi mr ne gom as jv dv bho dz hne ks\\_Deva mag mni shn yue zh\nja kjg mnw ksw rki mtr mwr xnr', a special correction step was done.\n\n\nFor these languages, the authors took the list of all virama characters and removed all\nunnecessary spaces between each instance of a virama character and the next\ncharacter with a regex.",
"passage: ### Myanmar Font Compatibility\n\n\nPrior to 2019, the most popular font for Burmese websites was the Zawgyi font.\nThe authors used Myanmar Tools to convert text.\n\n\nSeveral scripts, like the Chinese script, Tibetan script, and Thai, do not use\nwhitespace to separate characters. The languages with this property in this\ndataset are 'yue zh ja th lo kjg mnw my shn ksw rki km bo dz'.\n\n\nAlas, the Length aspect of the 'pct\\_questionable' score was calculated using\nsimplistic whitespace tokenization, and therefore rendered the whole\n'pct\\_questionable' score invalid for those languages. Therefore, for these\nlanguages, the \"clean\" data is identical to the \"noisy\" data (barring Chinese;\nsee below.)",
"passage: ### Special filters\n\n\nChinese had a particular issue with pornographic content. After manual inspection\na list of strings likely to be present in pornographic content was developed. All\npages containing at least one of these strings were removed. Resulted in 17%\nreduction in number of documents and 56% reduction in file size.\n\n\nA few more random notes, comparing to common alternative codes for these\nlanguages:\n\n\n* 'fil' for Filipino/Tagalog, not 'tl'\n* 'ak' for Twi/Akan, rather than 'tw'. This includes Fante.\n* Unfortunately use the macro code 'chm' for Meadow Mari (instead of the\ncorrect 'mhr'), and 'mrj' for Hill Mari\n* 'no' for Norwegian Bokmål, whereas some resources use\n'nb'\n* 'ps' for Pashto instead of 'pbt' (Southern Pashto)\n* 'ms' for Standard Malay, not 'zlm'\n* 'sq' for Albanian, and don't distinguish dialects like\nGheg ('aln') and Tosk ('als')\n* 'ber' as the code for Tamazight, after consultation with Tamazight\nspeakers opining that the dialect distinctions are not significant. Other\nresources use the individual codes like 'tzm' and 'kab'.\n* Macrocode 'qu' for Quechua. In practice, this seems usually to be\na mix of the Ayacucho and Cusco dialects. Other resources, like NLLB, may\nuse the dialect code, e.g. 'quy' for Ayacucho Chanka. The same is true for a\nfew other macro codes, like 'ff' (Macro code for Fulfulde, whereas other\nsources may use e.g. 'fuv'.)\n* Really, there are notes that can be made about almost any code, from the\nwell-accepted conventions like 'zh' for Mandarin, to many dialectical notes,\nlike which variant of Hmong really is the 'hmn' data? But the above ones are\nmade specifically for ones where the authors are aware of other datasources floating\nout there that use different conventions.\n\n\nAudit\n-----\n\n\nFollowing Quality at a Glance, the authors performed\nan \"audit\" of every corpus in this dataset. Although the authors did not speak most\nlanguages, they were able to give high-level comments on the general quality. They\nlooked at a sample of 20 documents of each language.\n\n\nAfter an initial round of auditing, they devised a new set of filters and applied\nthem. They then re-did all audits.### Overall notes from the audit\n\n\nThe decision was to include languages that looked noisy, but omit any language\nthat was clearly majority noise, or only had 20 or fewer docs. This is a low\nbar -- twenty documents can be very little indeed, and some of the corpora released are quite noisy, but all of them should have at least the potential to\nbe used in some useful way. The motivation for not releasing nonsense or tiny\ndatasets is to not give a false sense of how multilingual this dataset actually\nis (\"Representation washing\"), as recommended by Quality at a Glance.\n\n\nA few overarching points:\n\n\n* Many low-resource languages only had Bible text, or in some cases URL\ndata. These are marked in the rows below. Generally 'ok bible' means that\n100% of the audited sentences were Biblical, whereas if 'bible' is simply\nmentioned in the note, it was not the only source of data.\n* Indian languages in the Latin script had a high concentration of\npornographic content."
]
|
90b92a757a38d2da7ebd271c3e600d6ddc90abe4 | # Dataset Card for "allergy_type_bu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Eitanli/allergy_type_bu | [
"region:us"
]
| 2023-11-08T22:23:04+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "recipe", "dtype": "string"}, {"name": "allergy_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 108603536, "num_examples": 74465}], "download_size": 55013888, "dataset_size": 108603536}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-10T15:43:31+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "allergy_type_bu"
More Information needed | [
"# Dataset Card for \"allergy_type_bu\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"allergy_type_bu\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"allergy_type_bu\"\n\nMore Information needed"
]
|
94b64caf8708944702ba33f141d021253fbb246d | # Dataset Card for "paradetox-preprocess-maskedComments-without-INSERT-without-punctationComparision"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox-preprocess-maskedComments-without-INSERT-without-punctationComparision | [
"region:us"
]
| 2023-11-08T22:45:05+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}, {"name": "edit_ops", "sequence": {"sequence": "string"}}, {"name": "masked_comment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5592956, "num_examples": 19744}], "download_size": 2314734, "dataset_size": 5592956}} | 2023-11-08T22:45:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox-preprocess-maskedComments-without-INSERT-without-punctationComparision"
More Information needed | [
"# Dataset Card for \"paradetox-preprocess-maskedComments-without-INSERT-without-punctationComparision\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox-preprocess-maskedComments-without-INSERT-without-punctationComparision\"\n\nMore Information needed"
]
| [
6,
35
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox-preprocess-maskedComments-without-INSERT-without-punctationComparision\"\n\nMore Information needed"
]
|
80d383cdf6526f43c1ae9520c5d850f62724bc69 | # Dataset Card for "rsicd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | danielz01/rsicd | [
"region:us"
]
| 2023-11-08T22:45:25+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "path", "dtype": "string"}, {"name": "img_id", "dtype": "int64"}, {"name": "land_cover", "dtype": "string"}, {"name": "captions", "sequence": "string"}, {"name": "caption_ids", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 460526263.218, "num_examples": 8734}, {"name": "val", "num_bytes": 60256331.322, "num_examples": 1094}], "download_size": 473790206, "dataset_size": 520782594.53999996}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}]} | 2023-11-08T22:45:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "rsicd"
More Information needed | [
"# Dataset Card for \"rsicd\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"rsicd\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"rsicd\"\n\nMore Information needed"
]
|
31b420eadafbecce59196c62730ddaa7b243ec66 | # Dataset Card for "paradetox-preprocess-1TokenOnly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox-preprocess-1TokenOnly | [
"region:us"
]
| 2023-11-08T23:09:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}, {"name": "edit_ops", "sequence": {"sequence": "string"}}, {"name": "masked_comment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1531377.640599676, "num_examples": 5406}], "download_size": 526922, "dataset_size": 1531377.640599676}} | 2023-11-08T23:09:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox-preprocess-1TokenOnly"
More Information needed | [
"# Dataset Card for \"paradetox-preprocess-1TokenOnly\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox-preprocess-1TokenOnly\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox-preprocess-1TokenOnly\"\n\nMore Information needed"
]
|
e977ebe17d4c3a2a0fc0a9cd63f4088f11f5fe85 | # Dataset Card for "xView1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | danielz01/xView1 | [
"region:us"
]
| 2023-11-08T23:30:32+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "category", "sequence": "int64"}]}, {"name": "path", "dtype": "string"}, {"name": "chip_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 20904228042.0, "num_examples": 26541}], "download_size": 17082075353, "dataset_size": 20904228042.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-09T00:13:21+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "xView1"
More Information needed | [
"# Dataset Card for \"xView1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"xView1\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"xView1\"\n\nMore Information needed"
]
|
14170237ddd77673894fde8171a42a2747c7f4ca | # Dataset Card for "f719c86e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/f719c86e | [
"region:us"
]
| 2023-11-08T23:40:51+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 224, "num_examples": 10}], "download_size": 1403, "dataset_size": 224}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T23:40:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "f719c86e"
More Information needed | [
"# Dataset Card for \"f719c86e\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"f719c86e\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"f719c86e\"\n\nMore Information needed"
]
|
edf74466d009fa3d0771719ad3e540af4ee69270 | # Dataset Card for "dataset_hadith"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kheder/dataset_hadith | [
"region:us"
]
| 2023-11-08T23:42:39+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "hadith_id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "chapter_no", "dtype": "string"}, {"name": "hadith_no", "dtype": "string"}, {"name": "chapter", "dtype": "string"}, {"name": "chain_indx", "dtype": "string"}, {"name": "text_ar", "dtype": "string"}, {"name": "text_en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41709856, "num_examples": 34441}], "download_size": 0, "dataset_size": 41709856}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-08T23:50:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset_hadith"
More Information needed | [
"# Dataset Card for \"dataset_hadith\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset_hadith\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset_hadith\"\n\nMore Information needed"
]
|
9dd182cd9847c3bc11f76a8e25acc48ae5f0663e |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | samuelstevens/bioclip-demo | [
"region:us"
]
| 2023-11-08T23:57:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]} | 2023-11-29T02:33:03+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
8,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
3ecc89ed65bdbede85807fa71facae8da4208d5f | # Dataset Card for "dsml_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | higgsfield/dsml_original | [
"region:us"
]
| 2023-11-09T00:32:09+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 243684731, "num_examples": 32477}], "download_size": 27760890, "dataset_size": 243684731}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-09T22:27:51+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dsml_original"
More Information needed | [
"# Dataset Card for \"dsml_original\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dsml_original\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dsml_original\"\n\nMore Information needed"
]
|
f13dec05c4dafc3eb4fec5a2171657463f3ffca7 | # Dataset Card for "asag_cleaned_05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | damand2061/asag_cleaned_05 | [
"region:us"
]
| 2023-11-09T01:01:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "Soal", "dtype": "string"}, {"name": "Jawaban", "dtype": "string"}, {"name": "Nilai_1", "dtype": "float64"}, {"name": "Nilai_2", "dtype": "float64"}, {"name": "Rata-rata", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 667660, "num_examples": 679}, {"name": "validation", "num_bytes": 124168, "num_examples": 170}], "download_size": 78904, "dataset_size": 791828}} | 2023-11-09T01:01:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "asag_cleaned_05"
More Information needed | [
"# Dataset Card for \"asag_cleaned_05\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"asag_cleaned_05\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"asag_cleaned_05\"\n\nMore Information needed"
]
|
fcad700f61eb28a80e37d66a3613f0d694fd68d3 | # Dataset Card for "litra_ru_essays"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dim/litra_ru_essays | [
"region:us"
]
| 2023-11-09T01:28:49+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5247453, "num_examples": 650}], "download_size": 2565584, "dataset_size": 5247453}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-09T01:29:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "litra_ru_essays"
More Information needed | [
"# Dataset Card for \"litra_ru_essays\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"litra_ru_essays\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"litra_ru_essays\"\n\nMore Information needed"
]
|
f8e2fba9b13b1c4af96acf5e9803911780f5d7f2 | # Dataset Card for "SWE-bench_oracle"
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
The dataset was released as part of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770)
This dataset `SWE-bench_oracle` includes a formatted version of each instance using the "Oracle" retrieval setting as described in the paper. The `text` column can be used directly with LMs to generate patch files.
Models are instructed to generate a [`patch`](https://en.wikipedia.org/wiki/Patch_(Unix))-formatted file using the following template:
```diff
<patch>
diff
--- a/path/to/file.py
+++ b/path/to/file.py
@@ -1,3 +1,3 @@
This is a test file.
-It contains several lines.
+It has been modified.
This is the third line.
</patch>
```
This format can be used directly with the [SWE-bench inference scripts](https://github.com/princeton-nlp/SWE-bench/tree/main/inference). Please refer to these scripts for more details on inference.
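For illustration, here is a minimal sketch of pulling an instance with the Hugging Face `datasets` library; the preview truncation lengths are arbitrary display choices, not part of the dataset:

```python
from datasets import load_dataset

# Load the "Oracle" retrieval variant; the test split holds the 2,294
# evaluation instances described above.
swebench = load_dataset("princeton-nlp/SWE-bench_oracle", split="test")

example = swebench[0]
# `text` is the fully formatted prompt (instructions, "Oracle" retrieved
# file, and the patch template); `patch` is the gold reference solution.
print(example["instance_id"])
print(example["text"][:500])   # preview of the prompt fed to the LM
print(example["patch"][:500])  # preview of the gold patch
```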
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
text: (str) - The input text including instructions, the "Oracle" retrieved file, and an example of the patch format for output.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
```
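Since `FAIL_TO_PASS` and `PASS_TO_PASS` are stored as JSON-encoded strings rather than native lists, a consumer would typically decode them first. A short sketch, reusing the `example` record from the loading snippet above:

```python
import json

# Decode the JSON-encoded test lists described in the schema above.
fail_to_pass = json.loads(example["FAIL_TO_PASS"])
pass_to_pass = json.loads(example["PASS_TO_PASS"])

print(f"{len(fail_to_pass)} tests must go from failing to passing")
print(f"{len(pass_to_pass)} tests must keep passing")
```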
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | princeton-nlp/SWE-bench_oracle | [
"arxiv:2310.06770",
"region:us"
]
| 2023-11-09T01:36:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "instance_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "base_commit", "dtype": "string"}, {"name": "problem_statement", "dtype": "string"}, {"name": "hints_text", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "patch", "dtype": "string"}, {"name": "test_patch", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "FAIL_TO_PASS", "dtype": "string"}, {"name": "PASS_TO_PASS", "dtype": "string"}, {"name": "environment_setup_commit", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2927236667, "num_examples": 18817}, {"name": "dev", "num_bytes": 26551408, "num_examples": 225}, {"name": "test", "num_bytes": 246539542, "num_examples": 2294}, {"name": "validation", "num_bytes": 25752989, "num_examples": 191}], "download_size": 1279869834, "dataset_size": 3226080606}} | 2023-11-16T22:05:22+00:00 | [
"2310.06770"
]
| []
| TAGS
#arxiv-2310.06770 #region-us
| # Dataset Card for "SWE-bench_oracle"
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
The dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
This dataset 'SWE-bench_oracle' includes a formatted version of each instance using the "Oracle" retrieval setting as described in the paper. The 'text' column can be used directly with LMs to generate patch files.
Models are instructed to generate a 'patch'-formatted file using the following template:
This format can be used directly with the SWE-bench inference scripts. Please refer to these scripts for more details on inference.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
More Information needed | [
"# Dataset Card for \"SWE-bench_oracle\"",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.\n\nThe dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?\n\nThis dataset 'SWE-bench_oracle' includes a formatting of each instance using the \"Oracle\" retrieval setting as described in the paper. The 'text' column can be used directly with LMs to generate patch files.\nModels are instructed to generate 'patch') formatted file using the following template:\n\n\nThis format can be used directly with the SWE-bench inference scripts. Please refer to these scripts for more details on inference.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
]
| [
"TAGS\n#arxiv-2310.06770 #region-us \n",
"# Dataset Card for \"SWE-bench_oracle\"",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.\n\nThe dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?\n\nThis dataset 'SWE-bench_oracle' includes a formatting of each instance using the \"Oracle\" retrieval setting as described in the paper. The 'text' column can be used directly with LMs to generate patch files.\nModels are instructed to generate 'patch') formatted file using the following template:\n\n\nThis format can be used directly with the SWE-bench inference scripts. Please refer to these scripts for more details on inference.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
]
| [
15,
15,
205,
43,
30,
6,
24
]
| [
"passage: TAGS\n#arxiv-2310.06770 #region-us \n# Dataset Card for \"SWE-bench_oracle\"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.\n\nThe dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?\n\nThis dataset 'SWE-bench_oracle' includes a formatting of each instance using the \"Oracle\" retrieval setting as described in the paper. The 'text' column can be used directly with LMs to generate patch files.\nModels are instructed to generate 'patch') formatted file using the following template:\n\n\nThis format can be used directly with the SWE-bench inference scripts. Please refer to these scripts for more details on inference.### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.## Dataset Structure### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
]
|