| column | type | min length | max length |
| --- | --- | --- | --- |
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
5400df2c03ef0c9952126691d5d0703bbe5ca024 | # Dataset Card for "copiapoa-semantic-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aghent/copiapoa-semantic-v2 | [
"region:us"
] | 2023-12-02T22:33:38+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 138086316.0, "num_examples": 10000}], "download_size": 134668164, "dataset_size": 138086316.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-04T19:01:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "copiapoa-semantic-v2"
More Information needed | [
"# Dataset Card for \"copiapoa-semantic-v2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"copiapoa-semantic-v2\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"copiapoa-semantic-v2\"\n\nMore Information needed"
] |
c962ad27ea1754809d1ed6bea89a156b4a31c42c |
# Pexels Image To Video
Video and thumbnail pairs extracted from the Pexels-359k dataset (https://hf.co/datasets/Corran/pexelvideos).
# Download
The dataset is available in JSON and Parquet formats.
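A minimal loading sketch with the `datasets` library (the `train` split name is an assumption; check the repository's file layout):
```python
from datasets import load_dataset

# Image-to-video pairs; the Parquet files are resolved automatically.
pairs = load_dataset("jovianzm/img2vid-pexels-350k", split="train")
print(pairs[0])
```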
358,551 pairs in total. | jovianzm/img2vid-pexels-350k | [
"task_categories:image-to-video",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | 2023-12-02T23:29:26+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["image-to-video"], "pretty_name": "Pexels 359k Image-To-Video"} | 2023-12-09T18:49:38+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-to-video #size_categories-100K<n<1M #language-English #license-mit #region-us
|
# Pexels Image To Video
Video and thumbnail pairs extracted from the Pexels-359k dataset (URL).
# Download
The dataset is available in JSON and Parquet formats.
358,551 pairs in total. | [
"# Pexels Image To Video\n\nVideo and thumbnail pairs extracted from the Pexels-359k dataset. (URL",
"# Download\n\nDataset is available in JSON, and Parquet.\n\n358,551 pairs really."
] | [
"TAGS\n#task_categories-image-to-video #size_categories-100K<n<1M #language-English #license-mit #region-us \n",
"# Pexels Image To Video\n\nVideo and thumbnail pairs extracted from the Pexels-359k dataset. (URL",
"# Download\n\nDataset is available in JSON, and Parquet.\n\n358,551 pairs really."
] | [
39,
30,
22
] | [
"passage: TAGS\n#task_categories-image-to-video #size_categories-100K<n<1M #language-English #license-mit #region-us \n# Pexels Image To Video\n\nVideo and thumbnail pairs extracted from the Pexels-359k dataset. (URL# Download\n\nDataset is available in JSON, and Parquet.\n\n358,551 pairs really."
] |
481ddd4e0988f88c6a82208e800638eca3b7d469 |
# Dataset Card for Evaluation run of migtissera/Tess-M-v1.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/migtissera/Tess-M-v1.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [migtissera/Tess-M-v1.1](https://huggingface.co/migtissera/Tess-M-v1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_migtissera__Tess-M-v1.1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T00:02:00.527557](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Tess-M-v1.1/blob/main/results_2023-12-03T00-02-00.527557.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5466262319939348,
"acc_stderr": 0.013712471049515444
},
"harness|gsm8k|5": {
"acc": 0.5466262319939348,
"acc_stderr": 0.013712471049515444
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_migtissera__Tess-M-v1.1 | [
"region:us"
] | 2023-12-03T00:02:03+00:00 | {"pretty_name": "Evaluation run of migtissera/Tess-M-v1.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [migtissera/Tess-M-v1.1](https://huggingface.co/migtissera/Tess-M-v1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_migtissera__Tess-M-v1.1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T00:02:00.527557](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Tess-M-v1.1/blob/main/results_2023-12-03T00-02-00.527557.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5466262319939348,\n \"acc_stderr\": 0.013712471049515444\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5466262319939348,\n \"acc_stderr\": 0.013712471049515444\n }\n}\n```", "repo_url": "https://huggingface.co/migtissera/Tess-M-v1.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T00_02_00.527557", "path": ["**/details_harness|gsm8k|5_2023-12-03T00-02-00.527557.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T00-02-00.527557.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T00_02_00.527557", "path": ["results_2023-12-03T00-02-00.527557.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T00-02-00.527557.parquet"]}]}]} | 2023-12-03T00:02:07+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of migtissera/Tess-M-v1.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model migtissera/Tess-M-v1.1 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
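A minimal sketch of that call, as shown in full earlier in this card:
```python
from datasets import load_dataset

# Per-task details for this evaluation run; the "train" split tracks the latest results.
data = load_dataset("open-llm-leaderboard/details_migtissera__Tess-M-v1.1",
    "harness_gsm8k_5",
    split="train")
```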
## Latest results
These are the latest results from run 2023-12-03T00:02:00.527557 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
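The corresponding results snippet, reproduced from earlier in this card:
```python
{
    "all": {
        "acc": 0.5466262319939348,
        "acc_stderr": 0.013712471049515444
    },
    "harness|gsm8k|5": {
        "acc": 0.5466262319939348,
        "acc_stderr": 0.013712471049515444
    }
}
```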
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of migtissera/Tess-M-v1.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model migtissera/Tess-M-v1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T00:02:00.527557(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of migtissera/Tess-M-v1.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model migtissera/Tess-M-v1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T00:02:00.527557(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of migtissera/Tess-M-v1.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model migtissera/Tess-M-v1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T00:02:00.527557(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
a538aaeecb7dd920da28d9f3a6f08c649be6389b | # Dataset Card for "metal-python-synthetic-explanations-gpt4-raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lum-ai/metal-python-synthetic-explanations-gpt4-raw | [
"region:us"
] | 2023-12-03T01:29:16+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "chunk_id", "dtype": "string"}, {"name": "model_name", "dtype": "string"}, {"name": "temperature", "dtype": "int64"}, {"name": "max_tokens", "dtype": "float64"}, {"name": "use_raw_code", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "raw_text", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "kind", "dtype": "string"}, {"name": "start_text", "dtype": "int64"}, {"name": "stop_text", "dtype": "int64"}, {"name": "start_code", "dtype": "int64"}, {"name": "stop_code", "dtype": "int64"}, {"name": "domain", "dtype": "string"}, {"name": "full_name", "dtype": "string"}, {"name": "license", "struct": [{"name": "key", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "spdx_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "stargazers_count", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "chunk_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2771369932.206809, "num_examples": 300092}, {"name": "validation", "num_bytes": 167612875.8429717, "num_examples": 18272}, {"name": "test", "num_bytes": 324461765.3020142, "num_examples": 35131}], "download_size": 75623364, "dataset_size": 3263444573.351795}} | 2023-12-03T01:30:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "metal-python-synthetic-explanations-gpt4-raw"
More Information needed | [
"# Dataset Card for \"metal-python-synthetic-explanations-gpt4-raw\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"metal-python-synthetic-explanations-gpt4-raw\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"metal-python-synthetic-explanations-gpt4-raw\"\n\nMore Information needed"
] |
2601e36bf27a3fdb79ed71db206689f553e878d7 | # Dataset Card for "tweets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | siacus/tweets | [
"region:us"
] | 2023-12-03T01:52:29+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3897153, "num_examples": 2404}], "download_size": 640183, "dataset_size": 3897153}} | 2023-12-03T03:18:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tweets"
More Information needed | [
"# Dataset Card for \"tweets\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tweets\"\n\nMore Information needed"
] | [
6,
12
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"tweets\"\n\nMore Information needed"
] |
a264cb7b0305a113212bb98ed26bcac4ad964a96 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | john3202/Pv | [
"region:us"
] | 2023-12-03T02:07:57+00:00 | {} | 2023-12-03T02:08:48+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
428afdf837d2007754923886d907ca48963d1da9 |
---
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
- en
size_categories:
- 100K<n<1M
---
# ChatHaruhi English_62K
20,000 instances from the original ChatHaruhi-54K
(some of the original Chinese prompts were translated into English)
42,255 English instances from RoleLLM
token_len is counted with the Phi-1.5 tokenizer
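A minimal sketch of that count (the `microsoft/phi-1_5` checkpoint name is an assumption):
```python
from transformers import AutoTokenizer

# Phi-1.5 tokenizer, as used to compute the token_len field.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

def token_len(text: str) -> int:
    # Number of tokens one training instance occupies.
    return len(tokenizer(text)["input_ids"])
```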
**github repo:**
https://github.com/LC1332/Chat-Haruhi-Suzumiya
Please star our GitHub repo if you find the dataset useful
## Regenerate Data
If you want to regenerate the data with a different context length, a different embedding model, or your own character,
note that we have now refactored the final data-generation pipeline.
RoleLLM Data was generated by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E4%BB%8EHF%E6%95%B0%E6%8D%AE%E6%A0%BC%E5%BC%8F%E5%BC%80%E5%A7%8B%E7%BB%84%E7%BB%87%E6%95%B0%E6%8D%AE(RoleLLM).ipynb
Chinese Data was generated by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E9%A2%9D%E5%A4%96%E4%B8%AD%E6%96%87%E6%95%B0%E6%8D%AE.ipynb
63K Haruhi Data was generated by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E4%BD%BF%E7%94%A8ChatHaruhi54K%E7%94%9F%E6%88%90%E6%95%B0%E6%8D%AE.ipynb
The data was generated at around 8 instances per second on a T4 GPU
The code was partly implemented by [豆角](https://github.com/goodnessSZW)
and finally verified by [LC1332](https://github.com/LC1332)
English Data was filtered by
https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/Prepare_English_Training_set.ipynb
## Citation
Cite our paper if you use this dataset in your publication
```bibtex
@misc{li2023chatharuhi,
title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
year={2023},
eprint={2308.09597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | silk-road/ChatHaruhi-English-62K-RolePlaying | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"arxiv:2308.09597",
"region:us"
] | 2023-12-03T02:30:41+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text2text-generation"]} | 2023-12-03T02:37:02+00:00 | [
"2308.09597"
] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-2308.09597 #region-us
|
---
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
- en
size_categories:
- 100K<n<1M
---
# ChatHaruhi English_62K
20,000 instances from the original ChatHaruhi-54K
(some of the original Chinese prompts were translated into English)
42,255 English instances from RoleLLM
token_len is counted with the Phi-1.5 tokenizer
github repo:
URL
Please star our GitHub repo if you find the dataset useful
## Regenerate Data
If you want to regenerate the data with a different context length, a different embedding model, or your own character,
note that we have now refactored the final data-generation pipeline.
RoleLLM Data was generated by
URL
Chinese Data was generated by
URL
63K Haruhi Data was generated by
URL
The data was generated at around 8 instances per second on a T4 GPU
The code was partly implemented by 豆角
and finally verified by LC1332
English Data was filtered by
URL
Cite our paper if you use this dataset in your publication
| [
"# ChatHaruhi English_62K\n\n20000 instance from original ChatHaruhi-54K\n(translate original some chinese prompt into English)\n\n42255 English Data from RoleLLM\n\ntoken_len count via tokenizer from Phi-1.5\n\ngithub repo:\n\nURL\n\nPlease star our github repo if you found the dataset is useful",
"## Regenerate Data\n\nIf you want to regenerate data with different context length, different embedding model or using your own chracter\n\nnow we refactored the final data generating pipeline\n\nRoleLLM Data was generated by\n\nURL\n\nChinese Data was generated by\n\nURL\n\n63K Haruhi Data was generated by \n\nURL\n\nThe data generated in speed around 8 instances per second under T4 GPU\n\nThe code was partly implemented by 豆角\n\nand finally verified by LC1332\n\nEnglish Data was filtered by\n\nURL\n\nCite our paper if you use this dataset in your publication"
] | [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-2308.09597 #region-us \n",
"# ChatHaruhi English_62K\n\n20000 instance from original ChatHaruhi-54K\n(translate original some chinese prompt into English)\n\n42255 English Data from RoleLLM\n\ntoken_len count via tokenizer from Phi-1.5\n\ngithub repo:\n\nURL\n\nPlease star our github repo if you found the dataset is useful",
"## Regenerate Data\n\nIf you want to regenerate data with different context length, different embedding model or using your own chracter\n\nnow we refactored the final data generating pipeline\n\nRoleLLM Data was generated by\n\nURL\n\nChinese Data was generated by\n\nURL\n\n63K Haruhi Data was generated by \n\nURL\n\nThe data generated in speed around 8 instances per second under T4 GPU\n\nThe code was partly implemented by 豆角\n\nand finally verified by LC1332\n\nEnglish Data was filtered by\n\nURL\n\nCite our paper if you use this dataset in your publication"
] | [
63,
73,
125
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-2308.09597 #region-us \n# ChatHaruhi English_62K\n\n20000 instance from original ChatHaruhi-54K\n(translate original some chinese prompt into English)\n\n42255 English Data from RoleLLM\n\ntoken_len count via tokenizer from Phi-1.5\n\ngithub repo:\n\nURL\n\nPlease star our github repo if you found the dataset is useful## Regenerate Data\n\nIf you want to regenerate data with different context length, different embedding model or using your own chracter\n\nnow we refactored the final data generating pipeline\n\nRoleLLM Data was generated by\n\nURL\n\nChinese Data was generated by\n\nURL\n\n63K Haruhi Data was generated by \n\nURL\n\nThe data generated in speed around 8 instances per second under T4 GPU\n\nThe code was partly implemented by 豆角\n\nand finally verified by LC1332\n\nEnglish Data was filtered by\n\nURL\n\nCite our paper if you use this dataset in your publication"
] |
703b84b49ed463247fbf43b0fab0f659ee1b46a8 |
This dataset consists of real-world recordings collected at several locations in Hangzhou, China. Researchers may find it useful for fine-tuning Speech-to-Text models. For privacy reasons, the file is encrypted, and anyone interested in the dataset should contact the uploader for the access key. However, access will only be granted to those who can provide a reasonable explanation of their intended use of the data and agree to respect the privacy of the individuals in the recordings. | multiset/recordings_20230702_20230715 | [
"license:mit",
"region:us"
] | 2023-12-03T02:33:15+00:00 | {"license": "mit"} | 2023-12-03T03:22:45+00:00 | [] | [] | TAGS
#license-mit #region-us
|
This dataset consists of real-world recordings collected at several locations in Hangzhou, China. Researchers may find it useful for fine-tuning Speech-to-Text models. For privacy reasons, the file is encrypted, and anyone interested in the dataset should contact the uploader for the access key. However, access will only be granted to those who can provide a reasonable explanation of their intended use of the data and agree to respect the privacy of the individuals in the recordings. | [] | [
"TAGS\n#license-mit #region-us \n"
] | [
11
] | [
"passage: TAGS\n#license-mit #region-us \n"
] |
5b33685d271c549a831d3965e569f61c9c610e48 | # Dataset Card for "Y_normal_add"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | syp1229/Y_normal_add | [
"region:us"
] | 2023-12-03T02:49:08+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sample_rate", "dtype": "int64"}]}, {"name": "text", "dtype": "string"}, {"name": "scriptId", "dtype": "int64"}, {"name": "fileNm", "dtype": "string"}, {"name": "recrdTime", "dtype": "float64"}, {"name": "recrdQuality", "dtype": "int64"}, {"name": "recrdDt", "dtype": "string"}, {"name": "scriptSetNo", "dtype": "string"}, {"name": "recrdEnvrn", "dtype": "string"}, {"name": "colctUnitCode", "dtype": "string"}, {"name": "cityCode", "dtype": "string"}, {"name": "recrdUnit", "dtype": "string"}, {"name": "convrsThema", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "recorderId", "dtype": "string"}, {"name": "age", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5348401048, "num_examples": 7001}, {"name": "test", "num_bytes": 3224765, "num_examples": 5}], "download_size": 1168516332, "dataset_size": 5351625813}} | 2023-12-03T02:52:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Y_normal_add"
More Information needed | [
"# Dataset Card for \"Y_normal_add\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Y_normal_add\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Y_normal_add\"\n\nMore Information needed"
] |
269456be5542fe174b8d54c80dc2a36078cb5166 | # Dataset Card for "unit-test-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | erishabh/unit-test-v1 | [
"region:us"
] | 2023-12-03T03:02:31+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 59084973, "num_examples": 2470}], "download_size": 9425669, "dataset_size": 59084973}} | 2023-12-03T04:59:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "unit-test-v1"
More Information needed | [
"# Dataset Card for \"unit-test-v1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"unit-test-v1\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"unit-test-v1\"\n\nMore Information needed"
] |
296c45e35231950639317bf8b7092eb584f5d534 |
This is an NLG dataset for data extraction, translated from English to Thai.
| Teera/RelationExtraction-NLG-Thai | [
"license:apache-2.0",
"region:us"
] | 2023-12-03T03:51:49+00:00 | {"license": "apache-2.0"} | 2023-12-03T03:56:13+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
This is an NLG dataset for data extraction, translated from English to Thai.
| [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] | [
14
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
5fd254f8a2f106925e0170a8538838f3515653da | # Dataset Card for "HumanPPI_sep_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lhallee/HumanPPI_sep_reg | [
"region:us"
] | 2023-12-03T04:07:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "SeqA", "dtype": "string"}, {"name": "SeqB", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 25992799, "num_examples": 26319}, {"name": "valid", "num_bytes": 239522, "num_examples": 234}, {"name": "test", "num_bytes": 173184, "num_examples": 180}], "download_size": 16123155, "dataset_size": 26405505}} | 2023-12-03T04:07:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "HumanPPI_sep_reg"
More Information needed | [
"# Dataset Card for \"HumanPPI_sep_reg\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"HumanPPI_sep_reg\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"HumanPPI_sep_reg\"\n\nMore Information needed"
] |
5b00b4de804e27c109decb3c1a02c743a8d30304 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | RansErica/gatechinfo1 | [
"region:us"
] | 2023-12-03T04:52:31+00:00 | {} | 2023-12-03T11:05:58+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
ef2495d9d80107eff90887070fbf24637fc45e2a | # Dataset Card for "vehicle-dataset-v5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roupenminassian/vehicle-dataset-v5 | [
"region:us"
] | 2023-12-03T05:57:53+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "id", "sequence": "int64"}, {"name": "area", "sequence": "float64"}, {"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "category", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 228210980.35, "num_examples": 2029}], "download_size": 228503563, "dataset_size": 228210980.35}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-03T05:59:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vehicle-dataset-v5"
More Information needed | [
"# Dataset Card for \"vehicle-dataset-v5\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vehicle-dataset-v5\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"vehicle-dataset-v5\"\n\nMore Information needed"
] |
e6e3ec6f56f56e94e72cadbc9266740ac70b5341 | # Dataset Card for "phi-boolq-results_playing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | automated-research-group/phi-boolq-results_playing | [
"region:us"
] | 2023-12-03T06:24:28+00:00 | {"dataset_info": {"config_name": "{'do_sample'=False, 'beams'=1}", "features": [{"name": "id", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "bool_accuracy", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 93279, "num_examples": 3270}], "download_size": 68581, "dataset_size": 93279}, "configs": [{"config_name": "{'do_sample'=False, 'beams'=1}", "data_files": [{"split": "train", "path": "{'do_sample'=False, 'beams'=1}/train-*"}]}]} | 2023-12-03T06:24:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "phi-boolq-results_playing"
More Information needed | [
"# Dataset Card for \"phi-boolq-results_playing\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"phi-boolq-results_playing\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"phi-boolq-results_playing\"\n\nMore Information needed"
] |
32ffbfe36b100a6b5702436843ab40fef5b78e7e | # Synthetic QA Choice
Generated using ChatGPT3.5,
1. https://huggingface.co/datasets/malaysia-ai/dedup-text-dataset/resolve/main/dewanbahasa-jdbp.jsonl
2. https://huggingface.co/datasets/malaysia-ai/dedup-text-dataset/resolve/main/majalahsains.jsonl
3. https://huggingface.co/datasets/malaysia-ai/dedup-text-dataset/resolve/main/wikipedia-2023-10-01.jsonl
Notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/question-answer/chatgpt3.5-qa-choice
- [qa-dewanbahasa-jdbp.jsonl](qa-dewanbahasa-jdbp.jsonl), 2820 rows, 12.3 MB.
- [qa-majalahsains.jsonl](qa-majalahsains.jsonl), 2321 rows, 11.1 MB.
- [qa-ms-wikipedia.jsonl](qa-ms-wikipedia.jsonl), 8217 rows, 46.3 MB.
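A minimal sketch for iterating one of these files (a standard JSON-lines layout is an assumption; an example record follows below):
```python
import json

# Each line holds a source paragraph plus its generated
# multiple-choice questions under row["qa"]["qa"].
with open("qa-ms-wikipedia.jsonl") as f:
    for line in f:
        row = json.loads(line)
        for qa in row["qa"]["qa"]:
            print(qa["question"], "->", qa[qa["answer"]])
```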
## Example data
```python
{'paragraph': 'Pelaburan Syarikat China di Malaysia Tingkat Hubungan Dua Hala\n\nUntuk mendapatkan maklumat terkini, ikuti kami melalui Telegram\nKuala Lumpur –\xa0 Menteri Perdagangan Antarabangsa dan Industri, Tengku Datuk Seri Utama Tengku Zafrul Aziz berkata peningkatan minat syarikat China melabur di Malaysia memberi petanda baik kepada negara dan telah meningkatkan hubungan dua hala antara Malaysia dan China serta telah disokong oleh keyakinan terhadap kerajaan Perpaduan negara.\nBeliau berkata menerusi satu kenyataan yang dikeluarkan oleh Lembaga Pembangunan Pelaburan Malaysia (MIDA), Kementerian Perdagangan Antarabangsa dan Industri\xa0 (MITI) akan terus membantu memudahkan urusan para pelabur untuk menjalankan perniagaan di negara ini bagi menunjukkan bahawa Malaysia adalah sebuah negara sentiasa menyokong industri, perdagangan. Beliau juga menzahirkan ucapan tahniah kepada MIDA dan semua agensi berkaitan dalam membantu mendapatkan pelaburan berpotensi yang bernilai RM 170 bilion. “MITI, MIDA dan agensi lain kini perlu mula bekerja keras untuk membuat susulan dan seterusnya merealisasikan pelaburan ini dalam tempoh masa yang singkat,“ katanya.\nMenerusi misi perdagangan dan pelaburan di China baru-baru ini, ketua pegawai eksekutif MIDA, Datuk Arham Abdul Rahman berkata lebih 20 syarikat telah menunjukkan minat yang serius untuk melabur dalam ekonomi Malaysia yang sedang berkembang pesat melibatkan bidang seperti produk petrokimia, produk solar dan kaca, pusat data antarabangsa dan bahagian-bahagian dan komponen kenderaan elektrik.\nSyarikat Zhejiang Zhink Group, LONGi, GDS, Shanghai DC Science dan ZTE Corporation\xa0 merupakan antara syarikat yang menyatakan minat yang mendalam untuk melabur di Malaysia dan kesemua syarikat ini mengakui potensi yang besar di Malaysia. Datuk Arham Abdul Rahman menyatakan pencapaian ini menunjukkan komitmen kerajaan Malaysia untuk menarik pelaburan berkualiti tinggi daripada pengguna teknologi digital utama dunia yang akan menempatkan satu pertiga daripada syarikat unicorn dunia.\nPada tahun 2022, Malaysia mencatatkan sebanyak RM 264.4 bilion pelaburan diluluskan dalam sektor pembuatan, perkhidmatan dan primer dan daripada jumlah tersebut RM 55.4 bilion telah di sumbang oleh China yang melibatkan sejumlah 91 projek. MIDA berkata usaha niaga ini berpotensi mewujudkan 11 545 peluang pekerjaan baharu bagi tenaga kerja Malaysia yang sekali gus dapat mengukuhkan lagi struktur ekonomi negara.',
'qa': {'qa': [{'question': 'Siapakah Menteri Perdagangan Antarabangsa dan Industri Malaysia?',
'A': 'Tengku Datuk Seri Utama Tengku Zafrul Aziz',
'B': 'Datuk Arham Abdul Rahman',
'C': 'Tengku Zafrul Tengku Abdul Aziz',
'D': 'Datuk Seri Utama Tengku Zafrul Aziz',
'answer': 'A'},
{'question': 'Apakah yang dikatakan oleh Tengku Datuk Seri Utama Tengku Zafrul Aziz mengenai peningkatan minat syarikat China melabur di Malaysia?',
'A': 'Memberi petanda baik kepada negara',
'B': 'Meningkatkan hubungan dua hala antara Malaysia dan China',
'C': 'Disokong oleh keyakinan terhadap kerajaan Perpaduan negara',
'D': 'Semua jawapan di atas betul',
'answer': 'D'},
{'question': 'Berapakah nilai pelaburan berpotensi yang diperoleh daripada syarikat China yang bernilai RM 170 bilion?',
'A': 'RM 55.4 bilion',
'B': 'RM 264.4 bilion',
'C': 'RM 170 bilion',
'D': 'RM 11 545',
'answer': 'C'},
{'question': 'Berapakah bilangan projek yang melibatkan China dan telah diluluskan dalam tahun 2022?',
'A': '91 projek',
'B': '20 projek',
'C': '11 545 projek',
'D': 'Tidak dinyatakan dalam teks',
'answer': 'A'}]}}
``` | mesolitica/chatgpt-malaysian-qa-choice | [
"task_categories:question-answering",
"language:ms",
"region:us"
] | 2023-12-03T06:27:27+00:00 | {"language": ["ms"], "task_categories": ["question-answering"], "pretty_name": "malaysian-qa-choice"} | 2024-02-02T06:18:21+00:00 | [] | [
"ms"
] | TAGS
#task_categories-question-answering #language-Malay (macrolanguage) #region-us
| # Synthetic QA Choice
Generated using ChatGPT3.5,
1. URL
2. URL
3. URL
Notebooks at URL
- URL, 2820 rows, 12.3 MB.
- URL, 2321 rows, 11.1 MB.
- URL, 8217 rows, 46.3 MB.
## Example data
| [
"# Synthetic QA Choice\n\nGenerated using ChatGPT3.5,\n\n1. URL\n2. URL\n3. URL\n\nNotebooks at URL\n\n- URL, 2820 rows, 12.3 MB.\n- URL, 2321 rows, 11.1 MB.\n- URL, 8217 rows, 46.3 MB.",
"## Example data"
] | [
"TAGS\n#task_categories-question-answering #language-Malay (macrolanguage) #region-us \n",
"# Synthetic QA Choice\n\nGenerated using ChatGPT3.5,\n\n1. URL\n2. URL\n3. URL\n\nNotebooks at URL\n\n- URL, 2820 rows, 12.3 MB.\n- URL, 2321 rows, 11.1 MB.\n- URL, 8217 rows, 46.3 MB.",
"## Example data"
] | [
28,
64,
4
] | [
"passage: TAGS\n#task_categories-question-answering #language-Malay (macrolanguage) #region-us \n# Synthetic QA Choice\n\nGenerated using ChatGPT3.5,\n\n1. URL\n2. URL\n3. URL\n\nNotebooks at URL\n\n- URL, 2820 rows, 12.3 MB.\n- URL, 2321 rows, 11.1 MB.\n- URL, 8217 rows, 46.3 MB.## Example data"
] |
fdd40ca21b92688f0cb13d0f712d9758e304cd55 | # Dataset Card for "anime_arabic_style_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/anime_arabic_style_prompts | [
"region:us"
] | 2023-12-03T07:03:53+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 623328012, "num_examples": 1000000}], "download_size": 112334528, "dataset_size": 623328012}} | 2023-12-03T13:25:52+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "anime_arabic_style_prompts"
More Information needed | [
"# Dataset Card for \"anime_arabic_style_prompts\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"anime_arabic_style_prompts\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"anime_arabic_style_prompts\"\n\nMore Information needed"
] |
3866f80f824eb111bc1f95f0851d162568ccda00 |
# MUCaps Dataset
This is the MUCaps dataset, the largest music captioning dataset consisting of **21,966 music files** with a total playtime of **1273.78 hours** generated using the [MU-LLaMA](https://github.com/crypto-code/MU-LLaMA) model.
This dataset is used to train the [M<sup>2</sup>UGen](https://github.com/crypto-code/M2UGen) model.
To uncompress the audio files, run the following:
```
cat mucaps_audios.tar.gz.* | tar xzvf -
```
The [MUCapsCaptions.json](./MUCapsCaptions.json) file contains a dictionary with the filename as the key and the caption as the value.
This file is used to train the music encoder of the M<sup>2</sup>UGen model.
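As a quick illustration, the captions file can be read with plain `json` (a minimal sketch; the key shown is the same illustrative filename used in the instructions example below):

```python
import json

# Load the {filename: caption} mapping described above.
with open("MUCapsCaptions.json") as f:
    captions = json.load(f)

# Look up the caption for one audio file (illustrative key).
print(captions.get("mucaps_000000.mp3"))
```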
The [MUCapsInstructions.json](./MUCapsInstructions.json) file contains a list with each element having the following format:
```
{
"output_file": "mucaps_000000.mp3",
"conversation": [
{
"from": "human",
"value": "The music is described as fast, meaning it has a quick tempo and a lively rhythm.",
"input_modality": "text"
},
{
"from": "gpt",
"value": "",
"caption": "The music is described as fast, meaning it has a quick tempo and a lively rhythm.",
"output_modality": "audio"
}
]
}
```
This file is used to train the music decoder of the M<sup>2</sup>UGen model. | M2UGen/MUCaps | [
"license:cc-by-nc-nd-4.0",
"music",
"region:us"
] | 2023-12-03T07:09:42+00:00 | {"license": "cc-by-nc-nd-4.0", "arxiv": 2311.11255, "extra_gated_prompt": "Please fill in the following fields, the full name/institution/group/contact email/use case are MUST fields, and gender/github/personal homepage are OPTIONAL fields (You can simply use a '-' symbol to fill in these optional fields). An application form without required information will be declined.", "extra_gated_fields": {"Full Name": "text", "Gender": "text", "Institution": "text", "Group": "text", "Contact Email": "text", "Github": "text", "Personal Homepage": "text", "Use Case": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}, "tags": ["music"]} | 2024-01-02T06:42:43+00:00 | [] | [] | TAGS
#license-cc-by-nc-nd-4.0 #music #region-us
|
# MUCaps Dataset
This is the MUCaps dataset, the largest music captioning dataset consisting of 21,966 music files with a total playtime of 1273.78 hours generated using the MU-LLaMA model.
This dataset is used to train the M<sup>2</sup>UGen model.
To uncompress the audio files, run the following:
The URL file contains a dictionary with the filename as the key and the caption as the value.
This file is used to train the music encoder of the M<sup>2</sup>UGen model.
The URL file contains a list with each element having the following format:
This file is used to train the music decoder of the M<sup>2</sup>UGen model. | [
"# MUCaps Dataset\n\nThis is the MUCaps dataset, the largest music captioning dataset consisting of 21,966 music files with a total playtime of 1273.78 hours generated using the MU-LLaMA model.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nTo uncompress the audio files, run the following:\n\n\nThe URL file contains a dictionary with the filename as the key and the caption as the value. \nThis file is used to train the music encoder of the M<sup>2</sup>UGen model. \n\nThe URL file contains a list with each of the element having the following format:\n\nThis file is used to train the music decoder of the M<sup>2</sup>UGen model."
] | [
"TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n",
"# MUCaps Dataset\n\nThis is the MUCaps dataset, the largest music captioning dataset consisting of 21,966 music files with a total playtime of 1273.78 hours generated using the MU-LLaMA model.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nTo uncompress the audio files, run the following:\n\n\nThe URL file contains a dictionary with the filename as the key and the caption as the value. \nThis file is used to train the music encoder of the M<sup>2</sup>UGen model. \n\nThe URL file contains a list with each of the element having the following format:\n\nThis file is used to train the music decoder of the M<sup>2</sup>UGen model."
] | [
21,
178
] | [
"passage: TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n# MUCaps Dataset\n\nThis is the MUCaps dataset, the largest music captioning dataset consisting of 21,966 music files with a total playtime of 1273.78 hours generated using the MU-LLaMA model.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nTo uncompress the audio files, run the following:\n\n\nThe URL file contains a dictionary with the filename as the key and the caption as the value. \nThis file is used to train the music encoder of the M<sup>2</sup>UGen model. \n\nThe URL file contains a list with each of the element having the following format:\n\nThis file is used to train the music decoder of the M<sup>2</sup>UGen model."
] |
41e9c6e7ecb78c01119c1039f6989ccb96d1a241 |
# MUEdit Dataset
This is the MUEdit dataset used to facilitate music editing using natural language prompts, consisting of **10,815 music files** with a total playtime of **60.22 hours** generated using the [MU-LLaMA](https://github.com/crypto-code/MU-LLaMA) model.
This dataset is used to train the [M<sup>2</sup>UGen](https://github.com/crypto-code/M2UGen) model.
The [MUEditInstructions.json](./MUEditInstructions.json) file contains a list with each element having the following format:
```
{
"input_file": "6XAu7rS_e5A.mp3",
"output_file": "CiGFkFi1uUc.mp3",
"conversation": [
{
"from": "human",
"value": "To change the input music to the target music, you can slow down the tempo and add more percussion instruments to create a steady and rhythmic beat. You can also incorporate more melodic elements, such as a keyboard or strings, to create a more balanced sound. Additionally, you can experiment with different effects, such as reverb or delay, to enhance the overall sound of the music.",
"input_modality": "audio",
"caption": "The music is a fast-paced, upbeat tune with a catchy melody and a strong beat. The guitar is playing a simple but effective riff, and the drums are providing a solid foundation for the rest of the instruments. The horns are adding a layer of depth and richness to the sound, while the trumpet is providing a more melodic element. The trombone is adding a touch of humor to the mix with its off-key notes."
},
{
"from": "gpt",
"value": "Here is a music that is a slow and steady beat accompanied by a variety of percussion sounds, including drums, cymbals, and a bassline.",
"caption": "The music is a slow and steady beat that is accompanied by a variety of percussion sounds, including drums, cymbals, and a bassline.",
"output_modality": "audio"
}
]
}
``` | M2UGen/MUEdit | [
"license:cc-by-nc-nd-4.0",
"music",
"region:us"
] | 2023-12-03T07:13:18+00:00 | {"license": "cc-by-nc-nd-4.0", "arxiv": 2311.11255, "extra_gated_prompt": "Please fill in the following fields, the full name/institution/group/contact email/use case are MUST fields, and gender/github/personal homepage are OPTIONAL fields (You can simply use a '-' symbol to fill in these optional fields). An application form without required information will be declined.", "extra_gated_fields": {"Full Name": "text", "Gender": "text", "Institution": "text", "Group": "text", "Contact Email": "text", "Github": "text", "Personal Homepage": "text", "Use Case": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}, "tags": ["music"]} | 2024-01-02T07:19:18+00:00 | [] | [] | TAGS
#license-cc-by-nc-nd-4.0 #music #region-us
|
# MUEdit Dataset
This is the MUEdit dataset used to facilitate music editing using natural language prompts, consisting of 10,815 music files with a total playtime of 60.22 hours generated using the MU-LLaMA model.
This dataset is used to train the M<sup>2</sup>UGen model.
The URL file contains a list with each element having the following format:
| [
"# MUEdit Dataset\n\nThis is the MUEdit dataset used to facilitate music editing using natural language prompts, consisting of 10,815 music files with a total playtime of 60.22 hours generated using the MU-LLaMA model.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] | [
"TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n",
"# MUEdit Dataset\n\nThis is the MUEdit dataset used to facilitate music editing using natural language prompts, consisting of 10,815 music files with a total playtime of 60.22 hours generated using the MU-LLaMA model.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] | [
21,
94
] | [
"passage: TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n# MUEdit Dataset\n\nThis is the MUEdit dataset used to facilitate music editing using natural language prompts, consisting of 10,815 music files with a total playtime of 60.22 hours generated using the MU-LLaMA model.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] |
63666750fe5e27bb7e53d48a0f48c1ca32621908 | # MUImage Dataset
This is the MUImage dataset used to facilitate image to music generation, consisting of **9,966 music files** with a total playtime of **27.72 hours** generated using the [MU-LLaMA](https://github.com/crypto-code/MU-LLaMA) and [BLIP Image Captioning](https://github.com/salesforce/BLIP) models.
This dataset is used to train the [M<sup>2</sup>UGen](https://github.com/crypto-code/M2UGen) model.
The [MUImageInstructions.json](./MUImageInstructions.json) file contains a list with each element having the following format:
```
{
"input_file": "1OeqDC6vmto.jpg",
"output_file": "1OeqDC6vmto.mp3",
"conversation": [
{
"from": "human",
"value": "Generate a music for the image that is ethereal and spacey, with a focus on the synthesizers and bass, and a hint of a pulsing beat.",
"input_modality": "image",
"caption": "the image shows a bright star in the center of a galaxy"
},
{
"from": "gpt",
"value": "Here is a music that is space-themed and could fit well with the image of a bright star in the center of a galaxy.",
"caption": "The music is a blend of electronic and acoustic instruments, with a focus on the synthesizers and bass.",
"output_modality": "audio"
}
]
}
``` | M2UGen/MUImage | [
"license:cc-by-nc-nd-4.0",
"music",
"region:us"
] | 2023-12-03T07:13:32+00:00 | {"license": "cc-by-nc-nd-4.0", "arxiv": 2311.11255, "extra_gated_prompt": "Please fill in the following fields, the full name/institution/group/contact email/use case are MUST fields, and gender/github/personal homepage are OPTIONAL fields (You can simply use a '-' symbol to fill in these optional fields). An application form without required information will be declined.", "extra_gated_fields": {"Full Name": "text", "Gender": "text", "Institution": "text", "Group": "text", "Contact Email": "text", "Github": "text", "Personal Homepage": "text", "Use Case": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}, "tags": ["music"]} | 2024-01-02T07:30:14+00:00 | [] | [] | TAGS
#license-cc-by-nc-nd-4.0 #music #region-us
| # MUImage Dataset
This is the MUImage dataset used to facilitate image to music generation, consisting of 9,966 music files with a total playtime of 27.72 hours generated using the MU-LLaMA and BLIP Image Captioning models.
This dataset is used to train the M<sup>2</sup>UGen model.
The URL file contains a list with each element having the following format:
| [
"# MUImage Dataset\n\nThis is the MUImage dataset used to facilitate image to music generation, consisting of 9,966 music files with a total playtime of 27.72 hours generated using the MU-LLaMA and BLIP Image Captioning models.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] | [
"TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n",
"# MUImage Dataset\n\nThis is the MUImage dataset used to facilitate image to music generation, consisting of 9,966 music files with a total playtime of 27.72 hours generated using the MU-LLaMA and BLIP Image Captioning models.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] | [
21,
95
] | [
"passage: TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n# MUImage Dataset\n\nThis is the MUImage dataset used to facilitate image to music generation, consisting of 9,966 music files with a total playtime of 27.72 hours generated using the MU-LLaMA and BLIP Image Captioning models.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] |
4a6b9877a2f004cac9bfca1e38c73f01f6f2541d | # MUVideo Dataset
This is the MUVideo dataset used to facilitate video to music generation, consisting of **13,203 music files** with a total playtime of **36.72 hours** generated using the [MU-LLaMA](https://github.com/crypto-code/MU-LLaMA) and [VideoMAE captioning](https://huggingface.co/Neleac/timesformer-gpt2-video-captioning) models.
This dataset is used to train the [M<sup>2</sup>UGen](https://github.com/crypto-code/M2UGen) model.
The [MUVideoInstructions.json](./MUVideoInstructions.json) file contains a list with each element having the following format:
```
{
"input_file": "1OhKgYcAujk.mp4",
"output_file": "1OhKgYcAujk.mp3",
"conversation": [
{
"from": "human",
"value": "Generate a music for the video that is upbeat and energetic to match the guitar playing in the living room.",
"input_modality": "video",
"caption": "A man is playing a song on a guitar while sitting in a living room with a couch."
},
{
"from": "gpt",
"value": "Here is a music that is a solo upright bass playing a blues melody.",
"caption": "The music is a solo upright bass playing a blues melody.",
"output_modality": "audio"
}
]
}
``` | M2UGen/MUVideo | [
"license:cc-by-nc-nd-4.0",
"music",
"region:us"
] | 2023-12-03T07:13:50+00:00 | {"license": "cc-by-nc-nd-4.0", "arxiv": 2311.11255, "extra_gated_prompt": "Please fill in the following fields, the full name/institution/group/contact email/use case are MUST fields, and gender/github/personal homepage are OPTIONAL fields (You can simply use a '-' symbol to fill in these optional fields). An application form without required information will be declined.", "extra_gated_fields": {"Full Name": "text", "Gender": "text", "Institution": "text", "Group": "text", "Contact Email": "text", "Github": "text", "Personal Homepage": "text", "Use Case": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}, "tags": ["music"]} | 2024-01-02T08:05:53+00:00 | [] | [] | TAGS
#license-cc-by-nc-nd-4.0 #music #region-us
| # MUVideo Dataset
This is the MUVideo dataset used to facilitate video to music generation, consisting of 13,203 music files with a total playtime of 36.72 hours generated using the MU-LLaMA and VideoMAE captioning models.
This dataset is used to train the M<sup>2</sup>UGen model.
The URL file contains a list with each element having the following format:
| [
"# MUVideo Dataset\n\nThis is the MUVideo dataset used to facilitate image to music generation, consisting of 13,203 music files with a total playtime of 36.72 hours generated using the MU-LLaMA and VideoMAE captioning models.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] | [
"TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n",
"# MUVideo Dataset\n\nThis is the MUVideo dataset used to facilitate image to music generation, consisting of 13,203 music files with a total playtime of 36.72 hours generated using the MU-LLaMA and VideoMAE captioning models.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] | [
21,
93
] | [
"passage: TAGS\n#license-cc-by-nc-nd-4.0 #music #region-us \n# MUVideo Dataset\n\nThis is the MUVideo dataset used to facilitate image to music generation, consisting of 13,203 music files with a total playtime of 36.72 hours generated using the MU-LLaMA and VideoMAE captioning models.\nThis dataset is used to train the M<sup>2</sup>UGen model.\n\nThe URL file contains a list with each of the element having the following format:"
] |
1aa65f8f7924b35907159296091654e7c0281856 | # Dataset Card for "LLaVA-CC3M-Pretrain-595K-JA"
## Dataset Details
**Dataset Type:**
Japanese LLaVA CC3M Pretrain 595K is a localized version of the original LLaVA Visual Instruct CC3M 595K dataset. This version is translated into Japanese using [cyberagent/calm2-7b-chat](https://huggingface.co/cyberagent/calm2-7b-chat) and is aimed at serving similar purposes in the context of Japanese language.
**Resources for More Information:**
For information on the original dataset: [liuhaotian/LLaVA-CC3M-Pretrain-595K](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K)
**License:**
Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
CC-3M The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
**Questions or Comments:**
For questions or comments about the original model, you can go to [LLaVA GitHub Issues](https://github.com/haotian-liu/LLaVA/issues).
## Intended use
**Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. | toshi456/LLaVA-CC3M-Pretrain-595K-JA | [
"task_categories:visual-question-answering",
"size_categories:100K<n<1M",
"language:ja",
"license:other",
"region:us"
] | 2023-12-03T07:26:55+00:00 | {"language": ["ja"], "license": "other", "size_categories": ["100K<n<1M"], "task_categories": ["visual-question-answering"], "pretty_name": "Japanese LLaVA CC3M Pretrain 595K ", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 146361663, "num_examples": 595375}], "download_size": 45579837, "dataset_size": 146361663}} | 2023-12-03T07:45:54+00:00 | [] | [
"ja"
] | TAGS
#task_categories-visual-question-answering #size_categories-100K<n<1M #language-Japanese #license-other #region-us
| # Dataset Card for "LLaVA-CC3M-Pretrain-595K-JA"
## Dataset Details
Dataset Type:
Japanese LLaVA CC3M Pretrain 595K is a localized version of the original LLaVA Visual Instruct CC3M 595K dataset. This version is translated into Japanese using cyberagent/calm2-7b-chat and is aimed at serving similar purposes in the context of Japanese language.
Resources for More Information:
For information on the original dataset: liuhaotian/LLaVA-CC3M-Pretrain-595K
License:
Must comply with license of CC-3M, BLIP (if you use their synthetic caption).
CC-3M The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
Questions or Comments:
For questions or comments about the original model, you can go to LLaVA GitHub Issues.
## Intended use
Primary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. | [
"# Dataset Card for \"LLaVA-CC3M-Pretrain-595K-JA\"",
"## Dataset Details\n\nDataset Type:\nJapanese LLaVA CC3M Pretrain 595K is a localized version of the original LLaVA Visual Instruct CC3M 595K dataset. This version is translated into Japanese using cyberagent/calm2-7b-chat and is aimed at serving similar purposes in the context of Japanese language.\n\nResources for More Information:\nFor information on the original dataset: liuhaotian/LLaVA-CC3M-Pretrain-595K\n\nLicense:\nMust comply with license of CC-3M, BLIP (if you use their synthetic caption).\n\nCC-3M The dataset may be freely used for any purpose, although acknowledgement of Google LLC (\"Google\") as the data source would be appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.\n\nQuestions or Comments:\nFor questions or comments about the original model, you can go to LLaVA GitHub Issues.",
"## Intended use\n\nPrimary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence."
] | [
"TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #language-Japanese #license-other #region-us \n",
"# Dataset Card for \"LLaVA-CC3M-Pretrain-595K-JA\"",
"## Dataset Details\n\nDataset Type:\nJapanese LLaVA CC3M Pretrain 595K is a localized version of the original LLaVA Visual Instruct CC3M 595K dataset. This version is translated into Japanese using cyberagent/calm2-7b-chat and is aimed at serving similar purposes in the context of Japanese language.\n\nResources for More Information:\nFor information on the original dataset: liuhaotian/LLaVA-CC3M-Pretrain-595K\n\nLicense:\nMust comply with license of CC-3M, BLIP (if you use their synthetic caption).\n\nCC-3M The dataset may be freely used for any purpose, although acknowledgement of Google LLC (\"Google\") as the data source would be appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.\n\nQuestions or Comments:\nFor questions or comments about the original model, you can go to LLaVA GitHub Issues.",
"## Intended use\n\nPrimary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence."
] | [
44,
23,
242,
66
] | [
"passage: TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #language-Japanese #license-other #region-us \n# Dataset Card for \"LLaVA-CC3M-Pretrain-595K-JA\"## Dataset Details\n\nDataset Type:\nJapanese LLaVA CC3M Pretrain 595K is a localized version of the original LLaVA Visual Instruct CC3M 595K dataset. This version is translated into Japanese using cyberagent/calm2-7b-chat and is aimed at serving similar purposes in the context of Japanese language.\n\nResources for More Information:\nFor information on the original dataset: liuhaotian/LLaVA-CC3M-Pretrain-595K\n\nLicense:\nMust comply with license of CC-3M, BLIP (if you use their synthetic caption).\n\nCC-3M The dataset may be freely used for any purpose, although acknowledgement of Google LLC (\"Google\") as the data source would be appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.\n\nQuestions or Comments:\nFor questions or comments about the original model, you can go to LLaVA GitHub Issues.## Intended use\n\nPrimary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence."
] |
d4d65b05e3049f3c1b6e7632d20ea9f6f3308f78 | # Dataset Card for "final-final-qcri"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | euisuh15/synthetic-piss | [
"region:us"
] | 2023-12-03T07:28:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train10", "path": "data/train10-*"}, {"split": "train30", "path": "data/train30-*"}, {"split": "train50", "path": "data/train50-*"}, {"split": "train70", "path": "data/train70-*"}, {"split": "train90", "path": "data/train90-*"}, {"split": "valid1", "path": "data/valid1-*"}, {"split": "valid2", "path": "data/valid2-*"}, {"split": "test1", "path": "data/test1-*"}, {"split": "test2", "path": "data/test2-*"}, {"split": "test3", "path": "data/test3-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "is_poison", "dtype": "bool"}, {"name": "trigger_name", "dtype": "bool"}, {"name": "trigger_format", "dtype": "bool"}], "splits": [{"name": "train10", "num_bytes": 1856444, "num_examples": 3746}, {"name": "train30", "num_bytes": 1832448, "num_examples": 3741}, {"name": "train50", "num_bytes": 1809926, "num_examples": 3728}, {"name": "train70", "num_bytes": 1779003, "num_examples": 3701}, {"name": "train90", "num_bytes": 1761667, "num_examples": 3703}, {"name": "valid1", "num_bytes": 222342, "num_examples": 460}, {"name": "valid2", "num_bytes": 228818, "num_examples": 464}, {"name": "test1", "num_bytes": 218556, "num_examples": 460}, {"name": "test2", "num_bytes": 229206, "num_examples": 466}, {"name": "test3", "num_bytes": 224024, "num_examples": 466}], "download_size": 76278, "dataset_size": 10162434}} | 2023-12-24T08:31:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "final-final-qcri"
More Information needed | [
"# Dataset Card for \"final-final-qcri\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"final-final-qcri\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"final-final-qcri\"\n\nMore Information needed"
] |
ecdd3109779dbf3232af1a1c44fc55fc2ba7c0c6 | Asimok/KGLQA-KeySentenceSelect-CCLUE-MRC | [
"region:us"
] | 2023-12-03T10:15:51+00:00 | {"configs": [{"config_name": "raw", "data_files": [{"split": "train", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC-raw/train.jsonl"]}, {"split": "dev", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC-raw/dev.jsonl"]}, {"split": "test", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC-raw/test.jsonl"]}]}, {"config_name": "normal", "data_files": [{"split": "train", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC/train.jsonl"]}, {"split": "dev", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC/dev.jsonl"]}, {"split": "test", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC/test.jsonl"]}]}, {"config_name": "instruct", "data_files": [{"split": "train", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC-instruct/train.jsonl"]}, {"split": "dev", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC-instruct/dev.jsonl"]}, {"split": "test", "path": ["KGLQA-KeySentenceSelect-CCLUE-MRC-instruct/test.jsonl"]}]}]} | 2023-12-03T10:16:28+00:00 | [] | [] | TAGS
#region-us
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
||
038c5a42f44531e87708093fc7e16c777e5d6320 | # Dataset Card for "SetFitAbsa_FiQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ronenlap/SetFitAbsa_FiQA | [
"region:us"
] | 2023-12-03T11:02:55+00:00 | {"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "aspect", "dtype": "string"}, {"name": "polarity", "dtype": "string"}, {"name": "ordinal", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 71671, "num_examples": 670}, {"name": "test", "num_bytes": 25590, "num_examples": 235}], "download_size": 59461, "dataset_size": 97261}} | 2023-12-03T11:02:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SetFitAbsa_FiQA"
More Information needed | [
"# Dataset Card for \"SetFitAbsa_FiQA\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SetFitAbsa_FiQA\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"SetFitAbsa_FiQA\"\n\nMore Information needed"
] |
9bf00014d310c10a039eb8c57bddfc5a061bc9cd | Original Datasets
- garage-bAInd/Open-Platypus
- kyujinpy/KOpen-platypus | jhflow/platypus_ko_en_pair | [
"region:us"
] | 2023-12-03T12:18:37+00:00 | {} | 2023-12-04T02:30:44+00:00 | [] | [] | TAGS
#region-us
| Original Datasets
- garage-bAInd/Open-Platypus
- kyujinpy/KOpen-platypus | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
2299939d37134a56659cd72871904fdfea06806e | Original Datasets:
- databricks/databricks-dolly-15k
- nlpai-lab/databricks-dolly-15k-ko | jhflow/dolly_ko_en_pair | [
"region:us"
] | 2023-12-03T12:31:01+00:00 | {} | 2023-12-04T02:28:26+00:00 | [] | [] | TAGS
#region-us
| Original Datasets:
- databricks/databricks-dolly-15k
- nlpai-lab/databricks-dolly-15k-ko | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
dcfe623d2cdd83e9d8f0e476115d4d47ae549a1f | configs:
- config_name: marc
data_files: "marc_half1.csv"
- config_name: june
data_files: "june.csv"
- config_name: set
data_files: "set.csv"
- config_name: dez
data_files: "dez.csv"
| DouglasPontes/raw-evaluation-tweets | [
"region:us"
] | 2023-12-03T13:09:30+00:00 | {} | 2023-12-03T15:00:24+00:00 | [] | [] | TAGS
#region-us
| configs:
- config_name: marc
data_files: "marc_half1.csv"
- config_name: june
data_files: "URL"
- config_name: set
data_files: "URL"
- config_name: dez
data_files: "URL"
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
ac31f1aa1f8bd82d35d5cd26ed54bae45fb48618 |
# ibleducation/fordham-university
This dataset contains a set of query and response pairs about Fordham University

Data for the dataset was scraped from [fordham.edu](https://fordham.edu) using [GptCrawler](https://github.com/BuilderIO/gpt-crawler).
The resulting pages were then converted to query response pairs using GPT-3.5
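A minimal usage sketch (the `train` split name is an assumption):

```python
from datasets import load_dataset

# Load the query/response pairs (split name assumed).
pairs = load_dataset("iblai/fordham-university", split="train")
print(pairs[0])
```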
A total of **2707** data points exist in this dataset. | iblai/fordham-university | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-03T13:10:50+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]} | 2024-01-05T03:53:57+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
|
# ibleducation/fordham-university
This dataset contains a set of query and response pairs about Fordham University

Data for the dataset was scraped from URL using GptCrawler.
The resulting pages were then converted to query response pairs using GPT-3.5
A total of 2707 data points exist in this dataset. | [
"# ibleducation/fordham-university\n\nThis dataset contains a set of query and response pairs about Fordham university\n\nData for the dataset was scrapped from URL using GptCrawler.\n\nThe resulting pages were then converted to query response pairs using GPT-3.5\n\nA total of 2707 data points exist in this dataset."
] | [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n",
"# ibleducation/fordham-university\n\nThis dataset contains a set of query and response pairs about Fordham university\n\nData for the dataset was scrapped from URL using GptCrawler.\n\nThe resulting pages were then converted to query response pairs using GPT-3.5\n\nA total of 2707 data points exist in this dataset."
] | [
42,
78
] | [
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n# ibleducation/fordham-university\n\nThis dataset contains a set of query and response pairs about Fordham university\n\nData for the dataset was scrapped from URL using GptCrawler.\n\nThe resulting pages were then converted to query response pairs using GPT-3.5\n\nA total of 2707 data points exist in this dataset."
] |
c6389e8bbbbed4feb647404f7bf3b04a6ac3d37b |
The dataset is the Hindi-only and processed version of
- https://huggingface.co/datasets/ai4bharat/IndicQA/viewer/indicqa.hi
- https://huggingface.co/datasets/xtreme
- https://huggingface.co/datasets/xquad
- https://huggingface.co/datasets/databricks/databricks-dolly-15k/viewer/default/train?p=17&f[category][value]=%27closed_qa%27 (closed-qa only) | HydraIndicLM/Hindi_Train_ClosedDomainQA | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:hi",
"region:us"
] | 2023-12-03T13:17:56+00:00 | {"language": ["hi"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]} | 2023-12-07T19:54:39+00:00 | [] | [
"hi"
] | TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Hindi #region-us
|
The dataset is the Hindi-only and processed version of
- URL
- URL
- URL
- URL (closed-qa only) | [] | [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Hindi #region-us \n"
] | [
34
] | [
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Hindi #region-us \n"
] |
5828736b4f4dd4a11fa1edbbc3cbc65dc03b63bc | # Dataset for 3D_LLM training
Download the data in this folder into the data folder under the 3D_LLM project directory, then unzip it.
The files in this repository are:
1. /voxelized_voxels_sam_nonzero_preprocess: point-cloud information preprocessed in advance
2. /ScanQA: the complete original ScanQA training and validation sets
3. data_part2_scene_v2.json: pretraining data used by the original 3D-LLM, but the IDs need processing before they can be matched to scene IDs
4. data_scene_v2.json: 3D-LLM pretraining data that can be used directly; the IDs already correspond to scene IDs, and every scene that appears is a ScanNet scene
5. final_scene_map_dict_v2.json: the map between pretraining-data IDs and the original ScanNet data IDs (see the sketch after this list)
6. chang_3d_llm_train_id.py: a script for processing the pretraining-data IDs
7. /bert-base-uncased: BERT weights for the Q-Former structure used in 3D-LLM
9. SQA_***.json: data for training SQA
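A minimal sketch of how the ID map from item 5 might be used; the flat `{pretraining_id: scannet_id}` layout and the sample ID are assumptions, so check the actual JSON first:

```python
import json

# Load the pretraining-ID -> ScanNet-ID map (flat dict layout assumed).
with open("data/final_scene_map_dict_v2.json") as f:
    scene_map = json.load(f)

# Look up the ScanNet scene for one pretraining sample (hypothetical ID).
print(scene_map.get("00001"))
```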
This repository is still missing voxelized_features_sam_nonzero_preprocess, the data with features already extracted by different models; it takes up 220 GB of space.
Download link: https://huggingface.co/datasets/ShuhongZheng/3D-LLM/tree/main/scannet | hmxiong/3D_LLM | [
"region:us"
] | 2023-12-03T13:46:27+00:00 | {} | 2024-01-19T11:57:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset for 3D_LLM training
Download the data in this folder into the data folder under the 3D_LLM project directory, then unzip it.
The files in this repository are:
1. /voxelized_voxels_sam_nonzero_preprocess: point-cloud information preprocessed in advance
2. /ScanQA: the complete original ScanQA training and validation sets
3. data_part2_scene_v2.json: pretraining data used by the original 3D-LLM, but the IDs need processing before they can be matched to scene IDs
4. data_scene_v2.json: 3D-LLM pretraining data that can be used directly; the IDs already correspond to scene IDs, and every scene that appears is a ScanNet scene
5. final_scene_map_dict_v2.json: the map between pretraining-data IDs and the original ScanNet data IDs
6. chang_3d_llm_train_id.py: a script for processing the pretraining-data IDs
7. /bert-base-uncased: BERT weights for the Q-Former structure used in 3D-LLM
9. SQA_*.json: data for training SQA
This repository is still missing voxelized_features_sam_nonzero_preprocess, the data with features already extracted by different models; it takes up 220 GB of space.
Download link: URL | [
"# Dataset for 3D_LLM training\n将本文件夹中的数据下载到3D_LLM工程中的目录下的data文件夹,并解压即可\n该仓库中的文件有:\n1. /voxelized_voxels_sam_nonzero_preprocess 提前进行预处理好的点云信息\n2. /ScanQA 完整的ScanQA原版训练集与验证集\n3. data_part2_scene_v2.json 原版3D-LLM使用的预训练的数据,但是ID需要处理才能和场景ID对应上\n4. data_scene_v2.json 可以直接使用的3D-LLM预训练数据,ID已经和场景ID对应上,所有出现的场景均为ScanNet中的场景\n5. final_scene_map_dict_v2.json 预训练数据ID与原始的ScanNet数据ID对应map\n6. chang_3d_llm_train_id.py 用于处理预训练数据ID的脚本\n7. /bert-base-uncased 3D-LLM中使用的Q-Former结构的Bert权重\n9. SQA_*.json 用于训练SQA的数据\n\n该仓库中还缺少一个voxelized_features_sam_nonzero_preprocess,是已经用不同的模型将特征提取好之后的数据,占用空间220G\n下载链接如下:URL"
] | [
"TAGS\n#region-us \n",
"# Dataset for 3D_LLM training\n将本文件夹中的数据下载到3D_LLM工程中的目录下的data文件夹,并解压即可\n该仓库中的文件有:\n1. /voxelized_voxels_sam_nonzero_preprocess 提前进行预处理好的点云信息\n2. /ScanQA 完整的ScanQA原版训练集与验证集\n3. data_part2_scene_v2.json 原版3D-LLM使用的预训练的数据,但是ID需要处理才能和场景ID对应上\n4. data_scene_v2.json 可以直接使用的3D-LLM预训练数据,ID已经和场景ID对应上,所有出现的场景均为ScanNet中的场景\n5. final_scene_map_dict_v2.json 预训练数据ID与原始的ScanNet数据ID对应map\n6. chang_3d_llm_train_id.py 用于处理预训练数据ID的脚本\n7. /bert-base-uncased 3D-LLM中使用的Q-Former结构的Bert权重\n9. SQA_*.json 用于训练SQA的数据\n\n该仓库中还缺少一个voxelized_features_sam_nonzero_preprocess,是已经用不同的模型将特征提取好之后的数据,占用空间220G\n下载链接如下:URL"
] | [
6,
308
] | [
"passage: TAGS\n#region-us \n# Dataset for 3D_LLM training\n将本文件夹中的数据下载到3D_LLM工程中的目录下的data文件夹,并解压即可\n该仓库中的文件有:\n1. /voxelized_voxels_sam_nonzero_preprocess 提前进行预处理好的点云信息\n2. /ScanQA 完整的ScanQA原版训练集与验证集\n3. data_part2_scene_v2.json 原版3D-LLM使用的预训练的数据,但是ID需要处理才能和场景ID对应上\n4. data_scene_v2.json 可以直接使用的3D-LLM预训练数据,ID已经和场景ID对应上,所有出现的场景均为ScanNet中的场景\n5. final_scene_map_dict_v2.json 预训练数据ID与原始的ScanNet数据ID对应map\n6. chang_3d_llm_train_id.py 用于处理预训练数据ID的脚本\n7. /bert-base-uncased 3D-LLM中使用的Q-Former结构的Bert权重\n9. SQA_*.json 用于训练SQA的数据\n\n该仓库中还缺少一个voxelized_features_sam_nonzero_preprocess,是已经用不同的模型将特征提取好之后的数据,占用空间220G\n下载链接如下:URL"
] |
448f2576a9b282085c34b07ebfd3a527e67573c6 | # Dataset Card for "HelpSteer_prompt_per_row"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Asaf-Yehudai/HelpSteer_prompt_per_row | [
"region:us"
] | 2023-12-03T15:02:06+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "responses", "list": [{"name": "response", "dtype": "string"}, {"name": "scores", "struct": [{"name": "coherence", "dtype": "int64"}, {"name": "complexity", "dtype": "int64"}, {"name": "correctness", "dtype": "int64"}, {"name": "helpfulness", "dtype": "int64"}, {"name": "verbosity", "dtype": "int64"}]}]}], "splits": [{"name": "train", "num_bytes": 44115062, "num_examples": 9944}, {"name": "validation", "num_bytes": 2267028, "num_examples": 503}], "download_size": 25199197, "dataset_size": 46382090}} | 2023-12-03T15:02:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "HelpSteer_prompt_per_row"
More Information needed | [
"# Dataset Card for \"HelpSteer_prompt_per_row\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"HelpSteer_prompt_per_row\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"HelpSteer_prompt_per_row\"\n\nMore Information needed"
] |
9b3d4aed9ba220567421fd1f4c38f7fec7d2ddc3 |
# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/perlthoughts/Chupacabra-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [perlthoughts/Chupacabra-7B](https://huggingface.co/perlthoughts/Chupacabra-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_perlthoughts__Chupacabra-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T15:20:58.431709](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B/blob/main/results_2023-12-03T15-20-58.431709.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.621683093252464,
"acc_stderr": 0.013358407831777112
},
"harness|gsm8k|5": {
"acc": 0.621683093252464,
"acc_stderr": 0.013358407831777112
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_perlthoughts__Chupacabra-7B | [
"region:us"
] | 2023-12-03T15:21:01+00:00 | {"pretty_name": "Evaluation run of perlthoughts/Chupacabra-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [perlthoughts/Chupacabra-7B](https://huggingface.co/perlthoughts/Chupacabra-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_perlthoughts__Chupacabra-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T15:20:58.431709](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B/blob/main/results_2023-12-03T15-20-58.431709.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.621683093252464,\n \"acc_stderr\": 0.013358407831777112\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.621683093252464,\n \"acc_stderr\": 0.013358407831777112\n }\n}\n```", "repo_url": "https://huggingface.co/perlthoughts/Chupacabra-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T15_20_58.431709", "path": ["**/details_harness|gsm8k|5_2023-12-03T15-20-58.431709.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T15-20-58.431709.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T15_20_58.431709", "path": ["results_2023-12-03T15-20-58.431709.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T15-20-58.431709.parquet"]}]}]} | 2023-12-03T15:21:05+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T15:20:58.431709 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T15:20:58.431709(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T15:20:58.431709(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T15:20:58.431709(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e0ac169fa336d0294e2276c4ef20c98db9235c0a |
# Dataset Card for Evaluation run of qblocks/mistral_7b_norobots
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/qblocks/mistral_7b_norobots
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [qblocks/mistral_7b_norobots](https://huggingface.co/qblocks/mistral_7b_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_qblocks__mistral_7b_norobots",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T15:25:13.654024](https://huggingface.co/datasets/open-llm-leaderboard/details_qblocks__mistral_7b_norobots/blob/main/results_2023-12-03T15-25-13.654024.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.38362395754359363,
"acc_stderr": 0.013394238584938165
},
"harness|gsm8k|5": {
"acc": 0.38362395754359363,
"acc_stderr": 0.013394238584938165
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_qblocks__mistral_7b_norobots | [
"region:us"
] | 2023-12-03T15:25:16+00:00 | {"pretty_name": "Evaluation run of qblocks/mistral_7b_norobots", "dataset_summary": "Dataset automatically created during the evaluation run of model [qblocks/mistral_7b_norobots](https://huggingface.co/qblocks/mistral_7b_norobots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_qblocks__mistral_7b_norobots\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T15:25:13.654024](https://huggingface.co/datasets/open-llm-leaderboard/details_qblocks__mistral_7b_norobots/blob/main/results_2023-12-03T15-25-13.654024.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.38362395754359363,\n \"acc_stderr\": 0.013394238584938165\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.38362395754359363,\n \"acc_stderr\": 0.013394238584938165\n }\n}\n```", "repo_url": "https://huggingface.co/qblocks/mistral_7b_norobots", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T15_25_13.654024", "path": ["**/details_harness|gsm8k|5_2023-12-03T15-25-13.654024.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T15-25-13.654024.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T15_25_13.654024", "path": ["results_2023-12-03T15-25-13.654024.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T15-25-13.654024.parquet"]}]}]} | 2023-12-03T15:25:20+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of qblocks/mistral_7b_norobots
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model qblocks/mistral_7b_norobots on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
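For instance, with the `datasets` library (a minimal sketch; the repository id and configuration name are the ones recorded for this run):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_qblocks__mistral_7b_norobots",
    "harness_gsm8k_5",
    split="train")
```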
## Latest results
These are the latest results from run 2023-12-03T15:25:13.654024 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of qblocks/mistral_7b_norobots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/mistral_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T15:25:13.654024(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of qblocks/mistral_7b_norobots",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/mistral_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T15:25:13.654024(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of qblocks/mistral_7b_norobots## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model qblocks/mistral_7b_norobots on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T15:25:13.654024(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8cee4816305535f8b6c236fe9919b8d3e9fb54e6 |
# Mosaic format for embedding task text pair dataset
This repository stores dataset shards in the MosaicML streaming (MDS) format.
1. prepared at https://github.com/mesolitica/llama2-embedding/blob/main/notebooks/combine-embedding.ipynb
## how-to
1. git clone,
```bash
git lfs clone https://huggingface.co/datasets/malaysia-ai/mosaic-embedding-pairs
```
2. load it,
```python
from streaming import LocalDataset
from streaming.base.format.mds.encodings import Encoding, _encodings
import json

# Custom MDS encoding: a list of strings stored as a JSON-encoded byte string.
class ListStr(Encoding):
    def encode(self, obj):
        return json.dumps(obj).encode()

    def decode(self, data):
        return json.loads(data)

# Register the codec under the name used when the shards were written.
_encodings['liststr'] = ListStr

# Open the cloned shard directory and check the number of samples.
dataset = LocalDataset('mosaic-embedding-pairs')
len(dataset)
``` | malaysia-ai/mosaic-embedding-pairs | [
"language:ms",
"region:us"
] | 2023-12-03T15:36:49+00:00 | {"language": ["ms"]} | 2023-12-03T15:47:00+00:00 | [] | [
"ms"
] | TAGS
#language-Malay (macrolanguage) #region-us
|
# Mosaic format for embedding task text pair dataset
This repository stores dataset shards in the MosaicML streaming (MDS) format.
1. prepared at URL
## how-to
1. git clone,
2. load it,
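For example, a minimal sketch of both steps, assuming the `mosaicml-streaming` package (imported as `streaming`) is installed:

```bash
git lfs clone https://huggingface.co/datasets/malaysia-ai/mosaic-embedding-pairs
```

```python
from streaming import LocalDataset
from streaming.base.format.mds.encodings import Encoding, _encodings
import json

# Codec for list-of-strings fields, as registered in this repository's card.
class ListStr(Encoding):
    def encode(self, obj):
        return json.dumps(obj).encode()

    def decode(self, data):
        return json.loads(data)

_encodings['liststr'] = ListStr

# Read the shards from the cloned directory.
dataset = LocalDataset('mosaic-embedding-pairs')
len(dataset)
```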
| [
"# Mosaic format for embedding task text pair dataset\n\nThis repository is to store dataset shards using mosaic format.\n\n1. prepared at URL",
"## how-to\n\n1. git clone,\n\n\n\n2. load it,"
] | [
"TAGS\n#language-Malay (macrolanguage) #region-us \n",
"# Mosaic format for embedding task text pair dataset\n\nThis repository is to store dataset shards using mosaic format.\n\n1. prepared at URL",
"## how-to\n\n1. git clone,\n\n\n\n2. load it,"
] | [
16,
33,
13
] | [
"passage: TAGS\n#language-Malay (macrolanguage) #region-us \n# Mosaic format for embedding task text pair dataset\n\nThis repository is to store dataset shards using mosaic format.\n\n1. prepared at URL## how-to\n\n1. git clone,\n\n\n\n2. load it,"
] |
0c1ee6a819f21100309e2240cc8d5602df53b1b0 | This dataset was an attempt to reverse engineer song lyrics into training data using GPT-turbo. The dataset was supposed to be much bigger, but I suffered a catastrophic crash during the processing and was only able to recover a small portion. This is what I was able to salvage, and it still definitely needs some post-processing. You might be better off just starting over from scratch, but I didn't want to throw this away if somebody can salvage it for something. | openerotica/lyric-analysis | [
"license:apache-2.0",
"region:us"
] | 2023-12-03T16:11:53+00:00 | {"license": "apache-2.0"} | 2023-12-03T16:33:22+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| This dataset was an attempt to reverse engineer song lyrics into training data using GPT-turbo. The dataset was supposed to be much bigger, but I suffered a catastrophic crash during the processing and was only able to recover a small portion. This is what I was able to salvage, and it still definitely needs some post-processing. You might be better off just starting over from scratch, but I didn't want to throw this away if somebody can salvage it for something. | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] | [
14
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
dafa4683751b93391d449c5f0acedc7aa45c6e5d |
# Dataset Card for Evaluation run of amazingvince/zephyr-smol_llama-100m-dpo-full
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/amazingvince/zephyr-smol_llama-100m-dpo-full
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [amazingvince/zephyr-smol_llama-100m-dpo-full](https://huggingface.co/amazingvince/zephyr-smol_llama-100m-dpo-full) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_amazingvince__zephyr-smol_llama-100m-dpo-full",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T16:25:51.768387](https://huggingface.co/datasets/open-llm-leaderboard/details_amazingvince__zephyr-smol_llama-100m-dpo-full/blob/main/results_2023-12-03T16-25-51.768387.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.006823351023502654,
"acc_stderr": 0.002267537102254512
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.002267537102254512
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_amazingvince__zephyr-smol_llama-100m-dpo-full | [
"region:us"
] | 2023-12-03T16:25:53+00:00 | {"pretty_name": "Evaluation run of amazingvince/zephyr-smol_llama-100m-dpo-full", "dataset_summary": "Dataset automatically created during the evaluation run of model [amazingvince/zephyr-smol_llama-100m-dpo-full](https://huggingface.co/amazingvince/zephyr-smol_llama-100m-dpo-full) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_amazingvince__zephyr-smol_llama-100m-dpo-full\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T16:25:51.768387](https://huggingface.co/datasets/open-llm-leaderboard/details_amazingvince__zephyr-smol_llama-100m-dpo-full/blob/main/results_2023-12-03T16-25-51.768387.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.002267537102254512\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.002267537102254512\n }\n}\n```", "repo_url": "https://huggingface.co/amazingvince/zephyr-smol_llama-100m-dpo-full", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T16_25_51.768387", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-25-51.768387.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-25-51.768387.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T16_25_51.768387", "path": ["results_2023-12-03T16-25-51.768387.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T16-25-51.768387.parquet"]}]}]} | 2023-12-03T16:25:58+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of amazingvince/zephyr-smol_llama-100m-dpo-full
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model amazingvince/zephyr-smol_llama-100m-dpo-full on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
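For instance, with the `datasets` library (a minimal sketch; repository id and configuration name as given in this card):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_amazingvince__zephyr-smol_llama-100m-dpo-full",
    "harness_gsm8k_5",
    split="train")
```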
## Latest results
These are the latest results from run 2023-12-03T16:25:51.768387 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of amazingvince/zephyr-smol_llama-100m-dpo-full",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model amazingvince/zephyr-smol_llama-100m-dpo-full on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:25:51.768387(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of amazingvince/zephyr-smol_llama-100m-dpo-full",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model amazingvince/zephyr-smol_llama-100m-dpo-full on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:25:51.768387(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
28,
31,
177,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of amazingvince/zephyr-smol_llama-100m-dpo-full## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model amazingvince/zephyr-smol_llama-100m-dpo-full on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T16:25:51.768387(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
35b65e6d71169ea1452b3c4aafc90c6e63f4edc4 |
# Dataset Card for Evaluation run of allenai/digital-socrates-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/allenai/digital-socrates-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [allenai/digital-socrates-13b](https://huggingface.co/allenai/digital-socrates-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allenai__digital-socrates-13b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T16:32:03.154791](https://huggingface.co/datasets/open-llm-leaderboard/details_allenai__digital-socrates-13b/blob/main/results_2023-12-03T16-32-03.154791.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.29492039423805916,
"acc_stderr": 0.012560698010954769
},
"harness|gsm8k|5": {
"acc": 0.29492039423805916,
"acc_stderr": 0.012560698010954769
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_allenai__digital-socrates-13b | [
"region:us"
] | 2023-12-03T16:32:06+00:00 | {"pretty_name": "Evaluation run of allenai/digital-socrates-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [allenai/digital-socrates-13b](https://huggingface.co/allenai/digital-socrates-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allenai__digital-socrates-13b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T16:32:03.154791](https://huggingface.co/datasets/open-llm-leaderboard/details_allenai__digital-socrates-13b/blob/main/results_2023-12-03T16-32-03.154791.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.29492039423805916,\n \"acc_stderr\": 0.012560698010954769\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.29492039423805916,\n \"acc_stderr\": 0.012560698010954769\n }\n}\n```", "repo_url": "https://huggingface.co/allenai/digital-socrates-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T16_32_03.154791", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-32-03.154791.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-32-03.154791.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T16_32_03.154791", "path": ["results_2023-12-03T16-32-03.154791.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T16-32-03.154791.parquet"]}]}]} | 2023-12-03T16:32:11+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of allenai/digital-socrates-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model allenai/digital-socrates-13b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
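For instance, with the `datasets` library (a minimal sketch; repository id and configuration name as given in this card):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_allenai__digital-socrates-13b",
    "harness_gsm8k_5",
    split="train")
```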
## Latest results
These are the latest results from run 2023-12-03T16:32:03.154791 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of allenai/digital-socrates-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model allenai/digital-socrates-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:32:03.154791(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of allenai/digital-socrates-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model allenai/digital-socrates-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:32:03.154791(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
18,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of allenai/digital-socrates-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model allenai/digital-socrates-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T16:32:03.154791(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0d011d477485863c01e3c30c05faec7739cc7b64 |
# Dataset Card for Evaluation run of allenai/digital-socrates-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/allenai/digital-socrates-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [allenai/digital-socrates-7b](https://huggingface.co/allenai/digital-socrates-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allenai__digital-socrates-7b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T16:35:17.781750](https://huggingface.co/datasets/open-llm-leaderboard/details_allenai__digital-socrates-7b/blob/main/results_2023-12-03T16-35-17.781750.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.17892342683851403,
"acc_stderr": 0.010557661392901293
},
"harness|gsm8k|5": {
"acc": 0.17892342683851403,
"acc_stderr": 0.010557661392901293
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_allenai__digital-socrates-7b | [
"region:us"
] | 2023-12-03T16:35:20+00:00 | {"pretty_name": "Evaluation run of allenai/digital-socrates-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [allenai/digital-socrates-7b](https://huggingface.co/allenai/digital-socrates-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allenai__digital-socrates-7b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T16:35:17.781750](https://huggingface.co/datasets/open-llm-leaderboard/details_allenai__digital-socrates-7b/blob/main/results_2023-12-03T16-35-17.781750.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.17892342683851403,\n \"acc_stderr\": 0.010557661392901293\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17892342683851403,\n \"acc_stderr\": 0.010557661392901293\n }\n}\n```", "repo_url": "https://huggingface.co/allenai/digital-socrates-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T16_35_17.781750", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-35-17.781750.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-35-17.781750.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T16_35_17.781750", "path": ["results_2023-12-03T16-35-17.781750.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T16-35-17.781750.parquet"]}]}]} | 2023-12-03T16:35:24+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of allenai/digital-socrates-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model allenai/digital-socrates-7b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
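For instance, with the `datasets` library (a minimal sketch; repository id and configuration name as given in this card):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_allenai__digital-socrates-7b",
    "harness_gsm8k_5",
    split="train")
```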
## Latest results
These are the latest results from run 2023-12-03T16:35:17.781750 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of allenai/digital-socrates-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model allenai/digital-socrates-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:35:17.781750(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of allenai/digital-socrates-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model allenai/digital-socrates-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:35:17.781750(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
18,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of allenai/digital-socrates-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model allenai/digital-socrates-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T16:35:17.781750(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
07e6067dbb2a61d492702e7a7d569aeacbc7ab12 |
# Dataset Card for Evaluation run of berkeley-nest/Starling-LM-7B-alpha
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_berkeley-nest__Starling-LM-7B-alpha",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-04T15:57:32.180703](https://huggingface.co/datasets/open-llm-leaderboard/details_berkeley-nest__Starling-LM-7B-alpha/blob/main/results_2023-12-04T15-57-32.180703.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6484620563320449,
"acc_stderr": 0.03189811561189118,
"acc_norm": 0.6499663332374793,
"acc_norm_stderr": 0.03254094389407915,
"mc1": 0.3047735618115055,
"mc1_stderr": 0.01611412415688245,
"mc2": 0.463936332301049,
"mc2_stderr": 0.015153266555511496
},
"harness|arc:challenge|25": {
"acc": 0.5998293515358362,
"acc_stderr": 0.014317197787809172,
"acc_norm": 0.6382252559726962,
"acc_norm_stderr": 0.01404195794503808
},
"harness|hellaswag|10": {
"acc": 0.665803624775941,
"acc_stderr": 0.004707447244200621,
"acc_norm": 0.8490340569607648,
"acc_norm_stderr": 0.0035728399695219874
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.035146974678623884,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.035146974678623884
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.032500536843658404,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.032500536843658404
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370332,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370332
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.02533120243894444,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.02533120243894444
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5238095238095238,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.5238095238095238,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083525,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.035107665979592154,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.035107665979592154
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.028606204289229865,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.028606204289229865
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9119170984455959,
"acc_stderr": 0.02045374660160103,
"acc_norm": 0.9119170984455959,
"acc_norm_stderr": 0.02045374660160103
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.02371088850197057,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.02371088850197057
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.0302839955258844,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.0302839955258844
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.03958027231121569,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.03958027231121569
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5,
"acc_stderr": 0.034099716973523674,
"acc_norm": 0.5,
"acc_norm_stderr": 0.034099716973523674
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078966,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078966
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8270042194092827,
"acc_stderr": 0.024621562866768434,
"acc_norm": 0.8270042194092827,
"acc_norm_stderr": 0.024621562866768434
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7130044843049327,
"acc_stderr": 0.030360379710291947,
"acc_norm": 0.7130044843049327,
"acc_norm_stderr": 0.030360379710291947
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8264462809917356,
"acc_stderr": 0.0345727283691767,
"acc_norm": 0.8264462809917356,
"acc_norm_stderr": 0.0345727283691767
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573973,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573973
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.02158649400128138,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.02158649400128138
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.01354741565866226,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.01354741565866226
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.023703099525258176,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.023703099525258176
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.47039106145251397,
"acc_stderr": 0.016693154927383557,
"acc_norm": 0.47039106145251397,
"acc_norm_stderr": 0.016693154927383557
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.025261691219729484,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.025261691219729484
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.02608270069539966,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.02608270069539966
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.029752389657427047,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.029752389657427047
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4641460234680574,
"acc_stderr": 0.012737361318730581,
"acc_norm": 0.4641460234680574,
"acc_norm_stderr": 0.012737361318730581
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.028064998167040094,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.028064998167040094
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495144,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495144
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128448,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128448
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578334,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578334
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.032659863237109066,
"acc_norm": 0.88,
"acc_norm_stderr": 0.032659863237109066
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.847953216374269,
"acc_stderr": 0.027539122889061456,
"acc_norm": 0.847953216374269,
"acc_norm_stderr": 0.027539122889061456
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3047735618115055,
"mc1_stderr": 0.01611412415688245,
"mc2": 0.463936332301049,
"mc2_stderr": 0.015153266555511496
},
"harness|winogrande|5": {
"acc": 0.8058405682715075,
"acc_stderr": 0.01111698339239267
},
"harness|gsm8k|5": {
"acc": 0.6239575435936315,
"acc_stderr": 0.013342532064849765
}
}
```
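
Each per-task entry above reports an accuracy (`acc`), a normalized accuracy (`acc_norm`), and their standard errors. As a minimal sketch of how these numbers can be post-processed (assuming the JSON block above has been saved locally as `results.json`; the file name and the 1.96 * stderr normal approximation are illustrative assumptions, not part of the card):

```python
import json

# Minimal sketch: load a local copy of the results block above and
# recompute a macro-averaged MMLU score with rough 95% confidence
# intervals. "results.json" is an assumed local file name, and the
# 1.96 * stderr normal approximation is an illustrative choice;
# neither is part of the official evaluation pipeline.
with open("results.json") as f:
    results = json.load(f)

mmlu = {
    task: scores
    for task, scores in results.items()
    if task.startswith("harness|hendrycksTest-")
}

for task, scores in sorted(mmlu.items()):
    acc, err = scores["acc"], scores["acc_stderr"]
    low, high = acc - 1.96 * err, acc + 1.96 * err
    print(f"{task}: acc={acc:.3f} (95% CI ~ [{low:.3f}, {high:.3f}])")

macro_acc = sum(s["acc"] for s in mmlu.values()) / len(mmlu)
print(f"Macro-averaged MMLU accuracy over {len(mmlu)} subtasks: {macro_acc:.4f}")
```

The `acc_norm` figures can be aggregated the same way by swapping in the `"acc_norm"` key.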
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
"region:us"
] | 2023-12-03T16:39:22+00:00 | {"pretty_name": "Evaluation run of berkeley-nest/Starling-LM-7B-alpha", "dataset_summary": "Dataset automatically created during the evaluation run of model [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_berkeley-nest__Starling-LM-7B-alpha\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-04T15:57:32.180703](https://huggingface.co/datasets/open-llm-leaderboard/details_berkeley-nest__Starling-LM-7B-alpha/blob/main/results_2023-12-04T15-57-32.180703.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6484620563320449,\n \"acc_stderr\": 0.03189811561189118,\n \"acc_norm\": 0.6499663332374793,\n \"acc_norm_stderr\": 0.03254094389407915,\n \"mc1\": 0.3047735618115055,\n \"mc1_stderr\": 0.01611412415688245,\n \"mc2\": 0.463936332301049,\n \"mc2_stderr\": 0.015153266555511496\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5998293515358362,\n \"acc_stderr\": 0.014317197787809172,\n \"acc_norm\": 0.6382252559726962,\n \"acc_norm_stderr\": 0.01404195794503808\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.665803624775941,\n \"acc_stderr\": 0.004707447244200621,\n \"acc_norm\": 0.8490340569607648,\n \"acc_norm_stderr\": 0.0035728399695219874\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.035146974678623884,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.035146974678623884\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.032500536843658404,\n \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.032500536843658404\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370332,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370332\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894444,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894444\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5238095238095238,\n \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.5238095238095238,\n \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n \"acc_stderr\": 0.023157879349083525,\n \"acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.023157879349083525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.035107665979592154,\n \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.035107665979592154\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.028606204289229865,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229865\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n \"acc_norm\": 0.9119170984455959,\n \"acc_norm_stderr\": 0.02045374660160103\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.676923076923077,\n \"acc_stderr\": 0.02371088850197057,\n \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.02371088850197057\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.0302839955258844,\n \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.0302839955258844\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.034099716973523674,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.034099716973523674\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8270042194092827,\n \"acc_stderr\": 0.024621562866768434,\n \"acc_norm\": 0.8270042194092827,\n \"acc_norm_stderr\": 0.024621562866768434\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7130044843049327,\n \"acc_stderr\": 0.030360379710291947,\n \"acc_norm\": 0.7130044843049327,\n \"acc_norm_stderr\": 0.030360379710291947\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8264462809917356,\n \"acc_stderr\": 0.0345727283691767,\n \"acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.0345727283691767\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.03586594738573973,\n \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.03586594738573973\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.02158649400128138,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.02158649400128138\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n \"acc_stderr\": 
0.01354741565866226,\n \"acc_norm\": 0.8263090676883781,\n \"acc_norm_stderr\": 0.01354741565866226\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.023703099525258176,\n \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.023703099525258176\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.47039106145251397,\n \"acc_stderr\": 0.016693154927383557,\n \"acc_norm\": 0.47039106145251397,\n \"acc_norm_stderr\": 0.016693154927383557\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.025261691219729484,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.025261691219729484\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 0.6977491961414791,\n \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4645390070921986,\n \"acc_stderr\": 0.029752389657427047,\n \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.029752389657427047\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4641460234680574,\n \"acc_stderr\": 0.012737361318730581,\n \"acc_norm\": 0.4641460234680574,\n \"acc_norm_stderr\": 0.012737361318730581\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.028064998167040094,\n \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.028064998167040094\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495144,\n \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495144\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.025538433368578334,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.025538433368578334\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.847953216374269,\n \"acc_stderr\": 0.027539122889061456,\n \"acc_norm\": 0.847953216374269,\n \"acc_norm_stderr\": 0.027539122889061456\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3047735618115055,\n \"mc1_stderr\": 0.01611412415688245,\n \"mc2\": 0.463936332301049,\n \"mc2_stderr\": 0.015153266555511496\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8058405682715075,\n \"acc_stderr\": 0.01111698339239267\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6239575435936315,\n \"acc_stderr\": 0.013342532064849765\n }\n}\n```", "repo_url": 
"https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|arc:challenge|25_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T16_39_17.349427", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-39-17.349427.parquet"]}, {"split": "2023_12_03T16_39_42.112189", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-39-42.112189.parquet"]}, {"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|gsm8k|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hellaswag|10_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T15-57-32.180703.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T15-57-32.180703.parquet", 
"**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T15-57-32.180703.parquet", 
"**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-04T15-57-32.180703.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T15-57-32.180703.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": 
["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": 
["**/details_harness|truthfulqa:mc|0_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_04T15_57_32.180703", "path": ["**/details_harness|winogrande|5_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-04T15-57-32.180703.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T16_39_17.349427", "path": ["results_2023-12-03T16-39-17.349427.parquet"]}, {"split": "2023_12_03T16_39_42.112189", "path": ["results_2023-12-03T16-39-42.112189.parquet"]}, {"split": "2023_12_04T15_57_32.180703", "path": ["results_2023-12-04T15-57-32.180703.parquet"]}, {"split": "latest", "path": ["results_2023-12-04T15-57-32.180703.parquet"]}]}]} | 2023-12-04T16:01:08+00:00 | [] | [] | TAGS
] |
548bf959bff96821947f5a560d0c4955da2df6ad |
# Dataset Card for Evaluation run of KnutJaegersberg/Galactica-6.7B-EssayWriter
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KnutJaegersberg/Galactica-6.7B-EssayWriter
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [KnutJaegersberg/Galactica-6.7B-EssayWriter](https://huggingface.co/KnutJaegersberg/Galactica-6.7B-EssayWriter) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__Galactica-6.7B-EssayWriter",
"harness_gsm8k_5",
split="train")
```
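The aggregated metrics can be loaded directly as well: per this card's metadata, the "results" configuration exposes one split per run plus a "latest" alias, so a sketch like the following should return the latest aggregated row:

```python
from datasets import load_dataset

results = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__Galactica-6.7B-EssayWriter",
                       "results",
                       split="latest")
print(results[0])  # aggregated metrics of the most recent run
```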
## Latest results
These are the [latest results from run 2023-12-03T16:42:22.412540](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Galactica-6.7B-EssayWriter/blob/main/results_2023-12-03T16-42-22.412540.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.034874905231235785,
"acc_stderr": 0.005053480765022248
},
"harness|gsm8k|5": {
"acc": 0.034874905231235785,
"acc_stderr": 0.005053480765022248
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_KnutJaegersberg__Galactica-6.7B-EssayWriter | [
"region:us"
] | 2023-12-03T16:42:25+00:00 | {"pretty_name": "Evaluation run of KnutJaegersberg/Galactica-6.7B-EssayWriter", "dataset_summary": "Dataset automatically created during the evaluation run of model [KnutJaegersberg/Galactica-6.7B-EssayWriter](https://huggingface.co/KnutJaegersberg/Galactica-6.7B-EssayWriter) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KnutJaegersberg__Galactica-6.7B-EssayWriter\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T16:42:22.412540](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Galactica-6.7B-EssayWriter/blob/main/results_2023-12-03T16-42-22.412540.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.034874905231235785,\n \"acc_stderr\": 0.005053480765022248\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.034874905231235785,\n \"acc_stderr\": 0.005053480765022248\n }\n}\n```", "repo_url": "https://huggingface.co/KnutJaegersberg/Galactica-6.7B-EssayWriter", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T16_42_22.412540", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-42-22.412540.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-42-22.412540.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T16_42_22.412540", "path": ["results_2023-12-03T16-42-22.412540.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T16-42-22.412540.parquet"]}]}]} | 2023-12-03T16:42:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of KnutJaegersberg/Galactica-6.7B-EssayWriter
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model KnutJaegersberg/Galactica-6.7B-EssayWriter on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T16:42:22.412540 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of KnutJaegersberg/Galactica-6.7B-EssayWriter",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/Galactica-6.7B-EssayWriter on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:42:22.412540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of KnutJaegersberg/Galactica-6.7B-EssayWriter",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/Galactica-6.7B-EssayWriter on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T16:42:22.412540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of KnutJaegersberg/Galactica-6.7B-EssayWriter## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/Galactica-6.7B-EssayWriter on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T16:42:22.412540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5d98df2a5d5ef3d5a7f038271192dd00cdfeb580 | This repository contains preprocessed data from the paper *CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders* published at NeurIPS 2023.
We did not create these datasets; if you use them, please cite the original papers!
All data is stored as PyTorch tensors (inside Python dictionaries); images are normalized 8-bit integers. To use this data with CROMA, convert the tensors to floats and divide by 255.
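As a minimal sketch of that conversion (the batch below is a random stand-in with DFC-style shapes; real tensors come from the files documented in the sections that follow):

```python
import torch

# Stand-in uint8 batch shaped like DFC images: (batch, channels, height, width).
images_uint8 = torch.randint(0, 256, (4, 14, 96, 96), dtype=torch.uint8)

# CROMA expects float inputs in [0, 1]: cast to float, then divide by 255.
images = images_uint8.float() / 255.0
```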
**DFC 2020**
```python
dfc = torch.load("DFC_preprocessed.pt") # pixel annotations with 8 classes
train_images = dfc["train_images"] # shape (46_152, 14, 96, 96), first 12 channels are S2, last 2 channels are S1
train_labels = dfc["train_labels"] # shape (46_152, 96, 96)
validation_images = dfc["validation_images"] # shape (8_874, 14, 96, 96), first 12 channels are S2, last 2 channels are S1
validation_labels = dfc["validation_labels"] # shape (8_874, 96, 96)
```
```bib
Naoto Yokoya, Pedram Ghamisi, Ronny Hänsch, and Michael Schmitt.
2020 IEEE GRSS Data Fusion Contest: Global Land Cover Mapping With Weak Supervision.
IEEE Geoscience and Remote Sensing Magazine, 2020.
```
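As a hedged sketch of feeding these tensors to a standard training loop (`TensorDataset` and `DataLoader` are ordinary PyTorch utilities, not code from the CROMA repository; the per-batch normalization follows the note at the top of this card):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

dfc = torch.load("DFC_preprocessed.pt")
train_set = TensorDataset(dfc["train_images"], dfc["train_labels"])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

for images_uint8, labels in train_loader:
    images = images_uint8.float() / 255.0  # normalize per batch; the dataset stays uint8 in memory
    labels = labels.long()                 # (32, 96, 96) class indices for the 8 classes
    break
```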
**Dynamic World (DW)-Expert**
```python
dw = torch.load("DynamicWorld_Expert_preprocessed.pt") # pixel annotations with 9 classes (-1 as not-labeled)
train_images = dw["train_images"] # shape (20_422, 14, 96, 96), first 12 channels are S2, last 2 channels are S1
train_labels = dw["train_labels"] # shape (20_422, 96, 96)
validation_images = dw["val_images"] # shape (51_022, 14, 96, 96), first 12 channels are S2, last 2 channels are S1
validation_labels = dw["val_labels"] # shape (51_022, 96, 96)
```
```bib
Christopher F Brown, Steven P Brumby, Brookie Guzder-Williams, Tanya Birch, Samantha Brooks Hyde,
Joseph Mazzariello, Wanda Czerwinski, Valerie J Pasquarella, Robert Haertel, Simon Ilyushchenko, et al.
Dynamic World, Near real-time global 10 m land use land cover mapping.
Scientific Data, 2022.
```
**MARIDA**
```python
marida = torch.load("MARIDA_preprocessed.pt") # pixel annotations with 15 classes (-1 as not-labeled)
train_images = marida["train_images"] # shape (1_682, 11, 96, 96)
train_labels = marida["train_labels"] # shape (1_682, 96, 96)
validation_images = marida["validation_images"] # shape (1_615, 11, 96, 96)
validation_labels = marida["validation_labels"] # shape (1_615, 96, 96)
```
```bib
Katerina Kikaki, Ioannis Kakogeorgiou, Paraskevi Mikeli, Dionysios E Raitsos, and Konstantinos Karantzalos.
MARIDA: A benchmark for Marine Debris detection from Sentinel-2 remote sensing data.
PloS one, 2022.
```
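Since Dynamic World and MARIDA both mark unlabeled pixels with -1, a segmentation loss should skip those positions. One way to do this (a plain-PyTorch sketch, not code from the CROMA repository) is `CrossEntropyLoss` with `ignore_index`:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(ignore_index=-1)  # pixels labeled -1 contribute no loss

logits = torch.randn(2, 15, 96, 96)          # (batch, classes, H, W); MARIDA has 15 classes
labels = torch.randint(-1, 15, (2, 96, 96))  # -1 marks "not labeled"
loss = criterion(logits, labels)
```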
**Canadian Cropland**
```python
crops = torch.load("Canadian_Cropland_preprocessed.pt") # image annotations with 10 classes
# 2017
train_images_2017 = crops["2017"]["train_imgs"] # shape (9_898, 13, 65, 65)
train_labels_2017 = crops["2017"]["train_labels"] # shape (9_898)
validation_images_2017 = crops["2017"]["val_imgs"] # shape (2_075, 13, 65, 65)
validation_labels_2017 = crops["2017"]["val_labels"] # shape (2_075)
test_images_2017 = crops["2017"]["test_imgs"] # shape (2_138, 13, 65, 65)
test_labels_2017 = crops["2017"]["test_labels"] # shape (2_138)
# 2018
train_images_2018 = crops["2018"]["train_imgs"] # shape (12_789, 13, 65, 65)
train_labels_2018 = crops["2018"]["train_labels"] # shape (12_789)
validation_images_2018 = crops["2018"]["val_imgs"] # shape (2_714, 13, 65, 65)
validation_labels_2018 = crops["2018"]["val_labels"] # shape (2_714)
test_images_2018 = crops["2018"]["test_imgs"] # shape (2_822, 13, 65, 65)
test_labels_2018 = crops["2018"]["test_labels"] # shape (2_822)
# 2019a
train_images_2019a = crops["2019a"]["train_imgs"] # shape (11_628, 12, 65, 65)
train_labels_2019a = crops["2019a"]["train_labels"] # shape (11_628)
validation_images_2019a = crops["2019a"]["val_imgs"] # shape (2_486, 12, 65, 65)
validation_labels_2019a = crops["2019a"]["val_labels"] # shape (2_486)
test_images_2019a = crops["2019a"]["test_imgs"] # shape (2_517, 12, 65, 65)
test_labels_2019a = crops["2019a"]["test_labels"] # shape (2_517)
# 2019b
train_images_2019b = crops["2019b"]["train_imgs"] # shape (4_094, 13, 65, 65)
train_labels_2019b = crops["2019b"]["train_labels"] # shape (4_094)
validation_images_2019b = crops["2019b"]["val_imgs"] # shape (821, 13, 65, 65)
validation_labels_2019b = crops["2019b"]["val_labels"] # shape (821)
test_images_2019b = crops["2019b"]["test_imgs"] # shape (858, 13, 65, 65)
test_labels_2019b = crops["2019b"]["test_labels"] # shape (858)
# 2020
train_images_2020 = crops["2020"]["train_imgs"] # shape (15_475, 12, 65, 65)
train_labels_2020 = crops["2020"]["train_labels"] # shape (15_475)
validation_images_2020 = crops["2020"]["val_imgs"] # shape (3_318, 12, 65, 65)
validation_labels_2020 = crops["2020"]["val_labels"] # shape (3_318)
test_images_2020 = crops["2020"]["test_imgs"] # shape (3_339, 12, 65, 65)
test_labels_2020 = crops["2020"]["test_labels"] # shape (3_339)
```
```bib
Amanda A Boatswain Jacques, Abdoulaye Baniré Diallo, and Etienne Lord.
Towards the Creation of a Canadian Land-Use Dataset for Agricultural Land Classification.
In 42nd Canadian Symposium on Remote Sensing: Understanding Our World: Remote Sensing for a Sustainable Future, 2021.
``` | antofuller/CROMA_benchmarks | [
"region:us"
] | 2023-12-03T16:45:40+00:00 | {} | 2023-12-03T19:31:21+00:00 | [] | [] | TAGS
#region-us
| This repository contains preprocessed data from the paper *CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders* published at NeurIPS 2023.
We did not create these datasets; if you use them, please cite the original papers!
All data is stored as PyTorch tensors (inside Python dictionaries); images are normalized 8-bit integers. To use this data with CROMA, convert the tensors to floats and divide by 255.
DFC 2020
Dynamic World (DW)-Expert
MARIDA
Canadian Cropland
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
777b36ad7023769fa3f35a0b87b0e25f059127a1 |
# Dataset Card for Evaluation run of speechlessai/speechless-coding-7b-16k-tora
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/speechlessai/speechless-coding-7b-16k-tora
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [speechlessai/speechless-coding-7b-16k-tora](https://huggingface.co/speechlessai/speechless-coding-7b-16k-tora) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_speechlessai__speechless-coding-7b-16k-tora",
"harness_gsm8k_5",
split="train")
```
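Because each run is stored under a timestamped split, you can also pin an exact run instead of following "latest"; the split name below is the one listed in this card's metadata:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_speechlessai__speechless-coding-7b-16k-tora",
                    "harness_gsm8k_5",
                    split="2023_12_03T17_00_54.290226")  # pin this exact run
```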
## Latest results
These are the [latest results from run 2023-12-03T17:00:54.290226](https://huggingface.co/datasets/open-llm-leaderboard/details_speechlessai__speechless-coding-7b-16k-tora/blob/main/results_2023-12-03T17-00-54.290226.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.17058377558756635,
"acc_stderr": 0.010360898504733313
},
"harness|gsm8k|5": {
"acc": 0.17058377558756635,
"acc_stderr": 0.010360898504733313
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_speechlessai__speechless-coding-7b-16k-tora | [
"region:us"
] | 2023-12-03T17:00:57+00:00 | {"pretty_name": "Evaluation run of speechlessai/speechless-coding-7b-16k-tora", "dataset_summary": "Dataset automatically created during the evaluation run of model [speechlessai/speechless-coding-7b-16k-tora](https://huggingface.co/speechlessai/speechless-coding-7b-16k-tora) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_speechlessai__speechless-coding-7b-16k-tora\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T17:00:54.290226](https://huggingface.co/datasets/open-llm-leaderboard/details_speechlessai__speechless-coding-7b-16k-tora/blob/main/results_2023-12-03T17-00-54.290226.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.17058377558756635,\n \"acc_stderr\": 0.010360898504733313\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17058377558756635,\n \"acc_stderr\": 0.010360898504733313\n }\n}\n```", "repo_url": "https://huggingface.co/speechlessai/speechless-coding-7b-16k-tora", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_00_54.290226", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-00-54.290226.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-00-54.290226.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_00_54.290226", "path": ["results_2023-12-03T17-00-54.290226.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T17-00-54.290226.parquet"]}]}]} | 2023-12-03T17:01:01+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of speechlessai/speechless-coding-7b-16k-tora
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model speechlessai/speechless-coding-7b-16k-tora on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T17:00:54.290226 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of speechlessai/speechless-coding-7b-16k-tora",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-coding-7b-16k-tora on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:00:54.290226(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of speechlessai/speechless-coding-7b-16k-tora",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-coding-7b-16k-tora on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:00:54.290226(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
25,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of speechlessai/speechless-coding-7b-16k-tora## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-coding-7b-16k-tora on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T17:00:54.290226(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
950563052129b0bfca264da12aa348204d3cbeb8 |
# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-128k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-128k",
"harness_gsm8k_5",
split="train")
```
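This repo holds two runs, so the configuration carries two timestamped splits plus "latest". To discover them without hard-coding timestamps, a sketch using the standard `datasets` helper:

```python
from datasets import get_dataset_split_names

splits = get_dataset_split_names(
    "open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-128k",
    "harness_gsm8k_5",
)
print(splits)  # expected: two timestamped splits and 'latest'
```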
## Latest results
These are the [latest results from run 2023-12-03T17:07:28.766997](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-128k/blob/main/results_2023-12-03T17-07-28.766997.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3260045489006823,
"acc_stderr": 0.012911675645682847
},
"harness|gsm8k|5": {
"acc": 0.3260045489006823,
"acc_stderr": 0.012911675645682847
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-128k | [
"region:us"
] | 2023-12-03T17:07:10+00:00 | {"pretty_name": "Evaluation run of NousResearch/Yarn-Mistral-7b-128k", "dataset_summary": "Dataset automatically created during the evaluation run of model [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-128k\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T17:07:28.766997](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-128k/blob/main/results_2023-12-03T17-07-28.766997.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3260045489006823,\n \"acc_stderr\": 0.012911675645682847\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3260045489006823,\n \"acc_stderr\": 0.012911675645682847\n }\n}\n```", "repo_url": "https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_07_07.837997", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-07-07.837997.parquet"]}, {"split": "2023_12_03T17_07_28.766997", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-07-28.766997.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-07-28.766997.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_07_07.837997", "path": ["results_2023-12-03T17-07-07.837997.parquet"]}, {"split": "2023_12_03T17_07_28.766997", "path": ["results_2023-12-03T17-07-28.766997.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T17-07-28.766997.parquet"]}]}]} | 2023-12-03T17:07:35+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-128k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-128k on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T17:07:28.766997 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-128k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:07:28.766997(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-128k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:07:28.766997(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-128k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T17:07:28.766997(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c44d608c8aab56e3e7eabdd417c46012e7687ab8 |
# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-64k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-64k",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T17:07:49.588193](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-64k/blob/main/results_2023-12-03T17-07-49.588193.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.332827899924185,
"acc_stderr": 0.012979892496598278
},
"harness|gsm8k|5": {
"acc": 0.332827899924185,
"acc_stderr": 0.012979892496598278
}
}
```
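As a quick sanity check on the numbers above, a normal-approximation 95% confidence interval follows directly from the reported accuracy and standard error:

```python
acc, stderr = 0.332827899924185, 0.012979892496598278

# Normal approximation: acc +/- 1.96 * standard error.
low, high = acc - 1.96 * stderr, acc + 1.96 * stderr
print(f"GSM8K 5-shot accuracy: {acc:.3f} (95% CI {low:.3f} to {high:.3f})")
```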
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-64k | [
"region:us"
] | 2023-12-03T17:07:52+00:00 | {"pretty_name": "Evaluation run of NousResearch/Yarn-Mistral-7b-64k", "dataset_summary": "Dataset automatically created during the evaluation run of model [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-64k\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T17:07:49.588193](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Yarn-Mistral-7b-64k/blob/main/results_2023-12-03T17-07-49.588193.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.332827899924185,\n \"acc_stderr\": 0.012979892496598278\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.332827899924185,\n \"acc_stderr\": 0.012979892496598278\n }\n}\n```", "repo_url": "https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_07_49.588193", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-07-49.588193.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-07-49.588193.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_07_49.588193", "path": ["results_2023-12-03T17-07-49.588193.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T17-07-49.588193.parquet"]}]}]} | 2023-12-03T17:07:56+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-64k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-64k on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to the evaluated task.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T17:07:49.588193(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-64k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-64k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:07:49.588193(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-64k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-64k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:07:49.588193(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NousResearch/Yarn-Mistral-7b-64k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NousResearch/Yarn-Mistral-7b-64k on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T17:07:49.588193(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9090d71ef99534528a955fc2c902e11318f6d9c5 |
# Dataset Card for Evaluation run of NEU-HAI/mental-alpaca
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NEU-HAI/mental-alpaca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NEU-HAI/mental-alpaca](https://huggingface.co/NEU-HAI/mental-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NEU-HAI__mental-alpaca",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T17:17:34.237922](https://huggingface.co/datasets/open-llm-leaderboard/details_NEU-HAI__mental-alpaca/blob/main/results_2023-12-03T17-17-34.237922.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
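Given that the reported accuracy is 0.0, it can be worth inspecting the per-sample details before drawing conclusions. The sketch below is illustrative only — the detail columns vary across harness versions, so check them before relying on specific fields:
```python
from datasets import load_dataset

# Per-sample GSM8K details for the most recent run ("latest" split).
details = load_dataset("open-llm-leaderboard/details_NEU-HAI__mental-alpaca",
                       "harness_gsm8k_5",
                       split="latest")
df = details.to_pandas()
print(df.columns.tolist())  # detail columns are harness-dependent; inspect first
print(len(df))              # one row per evaluated GSM8K sample
```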
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_NEU-HAI__mental-alpaca | [
"region:us"
] | 2023-12-03T17:17:37+00:00 | {"pretty_name": "Evaluation run of NEU-HAI/mental-alpaca", "dataset_summary": "Dataset automatically created during the evaluation run of model [NEU-HAI/mental-alpaca](https://huggingface.co/NEU-HAI/mental-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NEU-HAI__mental-alpaca\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T17:17:34.237922](https://huggingface.co/datasets/open-llm-leaderboard/details_NEU-HAI__mental-alpaca/blob/main/results_2023-12-03T17-17-34.237922.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/NEU-HAI/mental-alpaca", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_17_34.237922", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-17-34.237922.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-17-34.237922.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_17_34.237922", "path": ["results_2023-12-03T17-17-34.237922.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T17-17-34.237922.parquet"]}]}]} | 2023-12-03T17:17:41+00:00 | [] | [] | TAGS
#region-us
ee01a8a0a76abb22265f16897621122f541baabf |
# Dataset Card for Evaluation run of OrionStarAI/OrionStar-Yi-34B-Chat-Llama
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T18:22:03.358595](https://huggingface.co/datasets/open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama/blob/main/results_2023-12-03T18-22-03.358595.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5390447308567097,
"acc_stderr": 0.013730428449116344
},
"harness|gsm8k|5": {
"acc": 0.5390447308567097,
"acc_stderr": 0.013730428449116344
}
}
```
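Because this repository records 4 runs, each timestamped split can be loaded on its own to compare runs over time. A minimal sketch, using a timestamp taken from this repo's configs:
```python
from datasets import get_dataset_split_names, load_dataset

repo = "open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama"
# Every recorded run for the GSM8K config, one timestamped split per run.
print(get_dataset_split_names(repo, "harness_gsm8k_5"))
# Load a specific run instead of "latest" (timestamp from the configs above).
run = load_dataset(repo, "harness_gsm8k_5",
                   split="2023_12_03T17_19_19.971847")
```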
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama | [
"region:us"
] | 2023-12-03T17:19:22+00:00 | {"pretty_name": "Evaluation run of OrionStarAI/OrionStar-Yi-34B-Chat-Llama", "dataset_summary": "Dataset automatically created during the evaluation run of model [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:22:03.358595](https://huggingface.co/datasets/open-llm-leaderboard/details_OrionStarAI__OrionStar-Yi-34B-Chat-Llama/blob/main/results_2023-12-03T18-22-03.358595.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5390447308567097,\n \"acc_stderr\": 0.013730428449116344\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5390447308567097,\n \"acc_stderr\": 0.013730428449116344\n }\n}\n```", "repo_url": "https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_19_19.971847", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-19-19.971847.parquet"]}, {"split": "2023_12_03T17_20_20.086635", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-20-20.086635.parquet"]}, {"split": "2023_12_03T18_21_56.763818", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-21-56.763818.parquet"]}, {"split": "2023_12_03T18_22_03.358595", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-22-03.358595.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-22-03.358595.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_19_19.971847", "path": ["results_2023-12-03T17-19-19.971847.parquet"]}, {"split": "2023_12_03T17_20_20.086635", "path": ["results_2023-12-03T17-20-20.086635.parquet"]}, {"split": "2023_12_03T18_21_56.763818", "path": ["results_2023-12-03T18-21-56.763818.parquet"]}, {"split": "2023_12_03T18_22_03.358595", "path": ["results_2023-12-03T18-22-03.358595.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-22-03.358595.parquet"]}]}]} | 2023-12-03T18:22:09+00:00 | [] | [] | TAGS
#region-us
a64112e3b2037f6599663e1b58803c6da3f6f046 |
# Dataset Card for Evaluation run of HiTZ/GoLLIE-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/HiTZ/GoLLIE-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_HiTZ__GoLLIE-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T17:27:35.795751](https://huggingface.co/datasets/open-llm-leaderboard/details_HiTZ__GoLLIE-7B/blob/main/results_2023-12-03T17-27-35.795751.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.032600454890068235,
"acc_stderr": 0.004891669021939591
},
"harness|gsm8k|5": {
"acc": 0.032600454890068235,
"acc_stderr": 0.004891669021939591
}
}
```
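The aggregated numbers above can also be fetched programmatically through the "results" configuration mentioned in the summary; a minimal sketch:
```python
from datasets import load_dataset

# Aggregated metrics for the most recent run; "latest" mirrors the
# timestamped result files listed in this repo's configs.
results = load_dataset("open-llm-leaderboard/details_HiTZ__GoLLIE-7B",
                       "results",
                       split="latest")
print(results)
```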
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_HiTZ__GoLLIE-7B | [
"region:us"
] | 2023-12-03T17:27:38+00:00 | {"pretty_name": "Evaluation run of HiTZ/GoLLIE-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_HiTZ__GoLLIE-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T17:27:35.795751](https://huggingface.co/datasets/open-llm-leaderboard/details_HiTZ__GoLLIE-7B/blob/main/results_2023-12-03T17-27-35.795751.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.032600454890068235,\n \"acc_stderr\": 0.004891669021939591\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.032600454890068235,\n \"acc_stderr\": 0.004891669021939591\n }\n}\n```", "repo_url": "https://huggingface.co/HiTZ/GoLLIE-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_27_35.795751", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-27-35.795751.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-27-35.795751.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_27_35.795751", "path": ["results_2023-12-03T17-27-35.795751.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T17-27-35.795751.parquet"]}]}]} | 2023-12-03T17:27:43+00:00 | [] | [] | TAGS
#region-us
0ebc95b1190aa6c833bec26d56731261b70fd83d | # Dataset Card for Nectar - Conversational
The [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) dataset reformatted into a messages format. The assistant response in each conversation is the rank=1 response from the original dataset.
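A minimal loading sketch — the exact column layout is not documented on this card, so inspect the dataset's features before relying on specific field names:
```python
from datasets import load_dataset

# Load the reformatted Nectar conversations and inspect the schema;
# split and column names are not documented here, so print them first.
ds = load_dataset("orangetin/nectar-conversation")
print(ds)  # available splits, row counts, and column names
```
| orangetin/nectar-conversation | [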
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-12-03T17:30:23+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["conversational", "text-generation"], "pretty_name": "Nectar - Conversational"} | 2023-12-03T17:42:41+00:00 | [] | [
"en"
] | TAGS
#task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #region-us
4ad85560ee5dc7e883d092244f2b1bd1bc7e5ed7 |
# Dataset Card for Evaluation run of FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_FPHam__Karen_TheEditor_V2_STRICT_Mistral_7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T17:38:53.248093](https://huggingface.co/datasets/open-llm-leaderboard/details_FPHam__Karen_TheEditor_V2_STRICT_Mistral_7B/blob/main/results_2023-12-03T17-38-53.248093.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3017437452615618,
"acc_stderr": 0.012643544762873351
},
"harness|gsm8k|5": {
"acc": 0.3017437452615618,
"acc_stderr": 0.012643544762873351
}
}
```
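As a plausibility check (an inference, not something documented on this card): the accuracy above corresponds to 398 of GSM8K's 1,319 test problems, and the reported standard error is consistent with the sample standard error of a Bernoulli mean:
```python
import math

# Back-of-envelope check; the 398/1319 decomposition is inferred from
# the reported accuracy, not stated anywhere in this repo.
n, correct = 1319, 398
p = correct / n                            # 0.3017437452615618
stderr = math.sqrt(p * (1 - p) / (n - 1))
print(p, stderr)                           # stderr ~0.012644, cf. 0.012643544762873351
```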
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_FPHam__Karen_TheEditor_V2_STRICT_Mistral_7B | [
"region:us"
] | 2023-12-03T17:38:57+00:00 | {"pretty_name": "Evaluation run of FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_FPHam__Karen_TheEditor_V2_STRICT_Mistral_7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T17:38:53.248093](https://huggingface.co/datasets/open-llm-leaderboard/details_FPHam__Karen_TheEditor_V2_STRICT_Mistral_7B/blob/main/results_2023-12-03T17-38-53.248093.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3017437452615618,\n \"acc_stderr\": 0.012643544762873351\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3017437452615618,\n \"acc_stderr\": 0.012643544762873351\n }\n}\n```", "repo_url": "https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_38_53.248093", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-38-53.248093.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-38-53.248093.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_38_53.248093", "path": ["results_2023-12-03T17-38-53.248093.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T17-38-53.248093.parquet"]}]}]} | 2023-12-03T17:39:03+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T17:38:53.248093(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:38:53.248093(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:38:53.248093(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
31,
31,
180,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T17:38:53.248093(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
cd1106307a7f0cb50d61936b13fe31e4d4a4474f |
# Dataset Card for Evaluation run of maywell/Synatra-RP-Orca-2-7b-v0.1
<!-- Provide a quick summary of the dataset. -->
This dataset was automatically created during the evaluation run of model [maywell/Synatra-RP-Orca-2-7b-v0.1](https://huggingface.co/maywell/Synatra-RP-Orca-2-7b-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1",
"harness_winogrande_5",
split="train")
```
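
The aggregated scores are available the same way. A minimal sketch, assuming the `"results"` configuration mentioned above follows the same loading convention as the per-task configurations (swap `"train"` for a timestamped split such as `"2024_01_07T22_12_28.167170"` to pin a specific run):

```python
from datasets import load_dataset

# Load the aggregated "results" configuration; the "train" split always
# points to the latest evaluation run of this model.
results = load_dataset(
    "open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1",
    "results",
    split="train",
)
print(results)
```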
## Latest results
These are the [latest results from run 2024-01-07T22:12:28.167170](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1/blob/main/results_2024-01-07T22-12-28.167170.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5617788388118229,
"acc_stderr": 0.03367703048955596,
"acc_norm": 0.565018654271738,
"acc_norm_stderr": 0.034368770251827,
"mc1": 0.3733170134638923,
"mc1_stderr": 0.016932370557570634,
"mc2": 0.5254804392398661,
"mc2_stderr": 0.015769818879652533
},
"harness|arc:challenge|25": {
"acc": 0.5511945392491467,
"acc_stderr": 0.014534599585097665,
"acc_norm": 0.5742320819112628,
"acc_norm_stderr": 0.014449464278868809
},
"harness|hellaswag|10": {
"acc": 0.5892252539334794,
"acc_stderr": 0.004909689876342047,
"acc_norm": 0.7730531766580363,
"acc_norm_stderr": 0.004180018992862967
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353228,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353228
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926603,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926603
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6037735849056604,
"acc_stderr": 0.030102793781791197,
"acc_norm": 0.6037735849056604,
"acc_norm_stderr": 0.030102793781791197
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6041666666666666,
"acc_stderr": 0.04089465449325582,
"acc_norm": 0.6041666666666666,
"acc_norm_stderr": 0.04089465449325582
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.44,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5028901734104047,
"acc_stderr": 0.038124005659748335,
"acc_norm": 0.5028901734104047,
"acc_norm_stderr": 0.038124005659748335
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.043898699568087785,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.043898699568087785
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4765957446808511,
"acc_stderr": 0.032650194750335815,
"acc_norm": 0.4765957446808511,
"acc_norm_stderr": 0.032650194750335815
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.34210526315789475,
"acc_stderr": 0.04462917535336936,
"acc_norm": 0.34210526315789475,
"acc_norm_stderr": 0.04462917535336936
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3306878306878307,
"acc_stderr": 0.02422996529842507,
"acc_norm": 0.3306878306878307,
"acc_norm_stderr": 0.02422996529842507
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6096774193548387,
"acc_stderr": 0.027751256636969576,
"acc_norm": 0.6096774193548387,
"acc_norm_stderr": 0.027751256636969576
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4039408866995074,
"acc_stderr": 0.0345245390382204,
"acc_norm": 0.4039408866995074,
"acc_norm_stderr": 0.0345245390382204
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.03453131801885416,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.03453131801885416
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7424242424242424,
"acc_stderr": 0.03115626951964683,
"acc_norm": 0.7424242424242424,
"acc_norm_stderr": 0.03115626951964683
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8290155440414507,
"acc_stderr": 0.027171213683164528,
"acc_norm": 0.8290155440414507,
"acc_norm_stderr": 0.027171213683164528
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5256410256410257,
"acc_stderr": 0.025317649726448666,
"acc_norm": 0.5256410256410257,
"acc_norm_stderr": 0.025317649726448666
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.02874204090394849,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.02874204090394849
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5210084033613446,
"acc_stderr": 0.03244980849990029,
"acc_norm": 0.5210084033613446,
"acc_norm_stderr": 0.03244980849990029
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.03879687024073327,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.03879687024073327
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7522935779816514,
"acc_stderr": 0.018508143602547832,
"acc_norm": 0.7522935779816514,
"acc_norm_stderr": 0.018508143602547832
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.033509916046960436,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.033509916046960436
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7401960784313726,
"acc_stderr": 0.03077855467869326,
"acc_norm": 0.7401960784313726,
"acc_norm_stderr": 0.03077855467869326
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.02747974455080851,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.02747974455080851
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6502242152466368,
"acc_stderr": 0.03200736719484503,
"acc_norm": 0.6502242152466368,
"acc_norm_stderr": 0.03200736719484503
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7022900763358778,
"acc_stderr": 0.04010358942462203,
"acc_norm": 0.7022900763358778,
"acc_norm_stderr": 0.04010358942462203
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7107438016528925,
"acc_stderr": 0.041391127276354626,
"acc_norm": 0.7107438016528925,
"acc_norm_stderr": 0.041391127276354626
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04557239513497751,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04557239513497751
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6380368098159509,
"acc_stderr": 0.037757007291414416,
"acc_norm": 0.6380368098159509,
"acc_norm_stderr": 0.037757007291414416
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973647,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973647
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8418803418803419,
"acc_stderr": 0.023902325549560396,
"acc_norm": 0.8418803418803419,
"acc_norm_stderr": 0.023902325549560396
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.768837803320562,
"acc_stderr": 0.015075523238101077,
"acc_norm": 0.768837803320562,
"acc_norm_stderr": 0.015075523238101077
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.638728323699422,
"acc_stderr": 0.025862201852277906,
"acc_norm": 0.638728323699422,
"acc_norm_stderr": 0.025862201852277906
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2759776536312849,
"acc_stderr": 0.014950103002475361,
"acc_norm": 0.2759776536312849,
"acc_norm_stderr": 0.014950103002475361
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6209150326797386,
"acc_stderr": 0.027780141207023344,
"acc_norm": 0.6209150326797386,
"acc_norm_stderr": 0.027780141207023344
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6527331189710611,
"acc_stderr": 0.027040745502307336,
"acc_norm": 0.6527331189710611,
"acc_norm_stderr": 0.027040745502307336
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.654320987654321,
"acc_stderr": 0.02646248777700187,
"acc_norm": 0.654320987654321,
"acc_norm_stderr": 0.02646248777700187
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3723404255319149,
"acc_stderr": 0.02883892147125145,
"acc_norm": 0.3723404255319149,
"acc_norm_stderr": 0.02883892147125145
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3970013037809648,
"acc_stderr": 0.012496346982909556,
"acc_norm": 0.3970013037809648,
"acc_norm_stderr": 0.012496346982909556
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4742647058823529,
"acc_stderr": 0.030332578094555033,
"acc_norm": 0.4742647058823529,
"acc_norm_stderr": 0.030332578094555033
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.553921568627451,
"acc_stderr": 0.020109864547181354,
"acc_norm": 0.553921568627451,
"acc_norm_stderr": 0.020109864547181354
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6,
"acc_stderr": 0.0469237132203465,
"acc_norm": 0.6,
"acc_norm_stderr": 0.0469237132203465
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.673469387755102,
"acc_stderr": 0.030021056238440313,
"acc_norm": 0.673469387755102,
"acc_norm_stderr": 0.030021056238440313
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6318407960199005,
"acc_stderr": 0.034104105654953025,
"acc_norm": 0.6318407960199005,
"acc_norm_stderr": 0.034104105654953025
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-virology|5": {
"acc": 0.463855421686747,
"acc_stderr": 0.03882310850890594,
"acc_norm": 0.463855421686747,
"acc_norm_stderr": 0.03882310850890594
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7602339181286549,
"acc_stderr": 0.03274485211946956,
"acc_norm": 0.7602339181286549,
"acc_norm_stderr": 0.03274485211946956
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3733170134638923,
"mc1_stderr": 0.016932370557570634,
"mc2": 0.5254804392398661,
"mc2_stderr": 0.015769818879652533
},
"harness|winogrande|5": {
"acc": 0.744277821625888,
"acc_stderr": 0.012261253845440474
},
"harness|gsm8k|5": {
"acc": 0.3949962092494314,
"acc_stderr": 0.01346535496997321
}
}
```
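
Since the per-task scores above are plain nested dictionaries, deriving an aggregate such as the MMLU macro-average is straightforward. A minimal sketch (the task names and `acc` fields mirror the JSON above; only three subtasks are inlined here for brevity):

```python
# Illustrative excerpt of the results dictionary printed above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.31},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5925925925925926},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6710526315789473},
}

# Macro-average accuracy over all MMLU (hendrycksTest) subtasks.
mmlu_scores = [
    metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"MMLU macro-average over {len(mmlu_scores)} subtasks: "
      f"{sum(mmlu_scores) / len(mmlu_scores):.4f}")
```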
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1 | [
"region:us"
] | 2023-12-03T17:42:02+00:00 | {"pretty_name": "Evaluation run of maywell/Synatra-RP-Orca-2-7b-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [maywell/Synatra-RP-Orca-2-7b-v0.1](https://huggingface.co/maywell/Synatra-RP-Orca-2-7b-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-07T22:12:28.167170](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1/blob/main/results_2024-01-07T22-12-28.167170.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5617788388118229,\n \"acc_stderr\": 0.03367703048955596,\n \"acc_norm\": 0.565018654271738,\n \"acc_norm_stderr\": 0.034368770251827,\n \"mc1\": 0.3733170134638923,\n \"mc1_stderr\": 0.016932370557570634,\n \"mc2\": 0.5254804392398661,\n \"mc2_stderr\": 0.015769818879652533\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5511945392491467,\n \"acc_stderr\": 0.014534599585097665,\n \"acc_norm\": 0.5742320819112628,\n \"acc_norm_stderr\": 0.014449464278868809\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5892252539334794,\n \"acc_stderr\": 0.004909689876342047,\n \"acc_norm\": 0.7730531766580363,\n \"acc_norm_stderr\": 0.004180018992862967\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.04244633238353228,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.04244633238353228\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926603,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926603\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6037735849056604,\n \"acc_stderr\": 0.030102793781791197,\n \"acc_norm\": 0.6037735849056604,\n \"acc_norm_stderr\": 0.030102793781791197\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6041666666666666,\n \"acc_stderr\": 0.04089465449325582,\n \"acc_norm\": 0.6041666666666666,\n \"acc_norm_stderr\": 0.04089465449325582\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n 
\"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5028901734104047,\n \"acc_stderr\": 0.038124005659748335,\n \"acc_norm\": 0.5028901734104047,\n \"acc_norm_stderr\": 0.038124005659748335\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.043898699568087785,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.043898699568087785\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4765957446808511,\n \"acc_stderr\": 0.032650194750335815,\n \"acc_norm\": 0.4765957446808511,\n \"acc_norm_stderr\": 0.032650194750335815\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.34210526315789475,\n \"acc_stderr\": 0.04462917535336936,\n \"acc_norm\": 0.34210526315789475,\n \"acc_norm_stderr\": 0.04462917535336936\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3306878306878307,\n \"acc_stderr\": 0.02422996529842507,\n \"acc_norm\": 0.3306878306878307,\n \"acc_norm_stderr\": 0.02422996529842507\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6096774193548387,\n \"acc_stderr\": 0.027751256636969576,\n \"acc_norm\": 0.6096774193548387,\n \"acc_norm_stderr\": 0.027751256636969576\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4039408866995074,\n \"acc_stderr\": 0.0345245390382204,\n \"acc_norm\": 0.4039408866995074,\n \"acc_norm_stderr\": 0.0345245390382204\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.03453131801885416,\n \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.03453131801885416\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7424242424242424,\n \"acc_stderr\": 0.03115626951964683,\n \"acc_norm\": 0.7424242424242424,\n \"acc_norm_stderr\": 0.03115626951964683\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8290155440414507,\n \"acc_stderr\": 0.027171213683164528,\n \"acc_norm\": 0.8290155440414507,\n \"acc_norm_stderr\": 0.027171213683164528\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5256410256410257,\n \"acc_stderr\": 0.025317649726448666,\n \"acc_norm\": 0.5256410256410257,\n \"acc_norm_stderr\": 0.025317649726448666\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.02874204090394849,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.02874204090394849\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5210084033613446,\n \"acc_stderr\": 0.03244980849990029,\n \"acc_norm\": 0.5210084033613446,\n \"acc_norm_stderr\": 0.03244980849990029\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.03879687024073327,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.03879687024073327\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7522935779816514,\n \"acc_stderr\": 0.018508143602547832,\n \"acc_norm\": 0.7522935779816514,\n \"acc_norm_stderr\": 0.018508143602547832\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.033509916046960436,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.033509916046960436\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7401960784313726,\n \"acc_stderr\": 0.03077855467869326,\n \"acc_norm\": 0.7401960784313726,\n \"acc_norm_stderr\": 0.03077855467869326\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7679324894514767,\n \"acc_stderr\": 0.02747974455080851,\n \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.02747974455080851\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7022900763358778,\n \"acc_stderr\": 0.04010358942462203,\n \"acc_norm\": 0.7022900763358778,\n \"acc_norm_stderr\": 0.04010358942462203\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7107438016528925,\n \"acc_stderr\": 0.041391127276354626,\n \"acc_norm\": 0.7107438016528925,\n \"acc_norm_stderr\": 0.041391127276354626\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04557239513497751,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04557239513497751\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6380368098159509,\n \"acc_stderr\": 0.037757007291414416,\n \"acc_norm\": 0.6380368098159509,\n \"acc_norm_stderr\": 0.037757007291414416\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n \"acc_stderr\": 0.04653333146973647,\n \"acc_norm\": 0.4017857142857143,\n \"acc_norm_stderr\": 0.04653333146973647\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8418803418803419,\n \"acc_stderr\": 0.023902325549560396,\n \"acc_norm\": 0.8418803418803419,\n \"acc_norm_stderr\": 0.023902325549560396\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.768837803320562,\n \"acc_stderr\": 0.015075523238101077,\n \"acc_norm\": 0.768837803320562,\n \"acc_norm_stderr\": 0.015075523238101077\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.638728323699422,\n \"acc_stderr\": 0.025862201852277906,\n \"acc_norm\": 0.638728323699422,\n \"acc_norm_stderr\": 0.025862201852277906\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2759776536312849,\n \"acc_stderr\": 0.014950103002475361,\n \"acc_norm\": 0.2759776536312849,\n \"acc_norm_stderr\": 0.014950103002475361\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6209150326797386,\n \"acc_stderr\": 0.027780141207023344,\n \"acc_norm\": 0.6209150326797386,\n \"acc_norm_stderr\": 0.027780141207023344\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.654320987654321,\n \"acc_stderr\": 0.02646248777700187,\n \"acc_norm\": 0.654320987654321,\n \"acc_norm_stderr\": 0.02646248777700187\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3723404255319149,\n \"acc_stderr\": 0.02883892147125145,\n \"acc_norm\": 0.3723404255319149,\n \"acc_norm_stderr\": 0.02883892147125145\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3970013037809648,\n \"acc_stderr\": 0.012496346982909556,\n \"acc_norm\": 0.3970013037809648,\n \"acc_norm_stderr\": 0.012496346982909556\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4742647058823529,\n \"acc_stderr\": 0.030332578094555033,\n \"acc_norm\": 0.4742647058823529,\n \"acc_norm_stderr\": 0.030332578094555033\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.553921568627451,\n \"acc_stderr\": 0.020109864547181354,\n \"acc_norm\": 0.553921568627451,\n \"acc_norm_stderr\": 0.020109864547181354\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.0469237132203465,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.0469237132203465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.673469387755102,\n \"acc_stderr\": 0.030021056238440313,\n \"acc_norm\": 0.673469387755102,\n \"acc_norm_stderr\": 0.030021056238440313\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6318407960199005,\n \"acc_stderr\": 0.034104105654953025,\n \"acc_norm\": 0.6318407960199005,\n \"acc_norm_stderr\": 0.034104105654953025\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.463855421686747,\n \"acc_stderr\": 0.03882310850890594,\n \"acc_norm\": 0.463855421686747,\n \"acc_norm_stderr\": 0.03882310850890594\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7602339181286549,\n \"acc_stderr\": 0.03274485211946956,\n \"acc_norm\": 0.7602339181286549,\n \"acc_norm_stderr\": 0.03274485211946956\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3733170134638923,\n \"mc1_stderr\": 0.016932370557570634,\n \"mc2\": 0.5254804392398661,\n \"mc2_stderr\": 0.015769818879652533\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.744277821625888,\n \"acc_stderr\": 0.012261253845440474\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3949962092494314,\n \"acc_stderr\": 0.01346535496997321\n }\n}\n```", "repo_url": 
"https://huggingface.co/maywell/Synatra-RP-Orca-2-7b-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|arc:challenge|25_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_42_00.048220", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-42-00.048220.parquet"]}, {"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|gsm8k|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hellaswag|10_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-12-28.167170.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-12-28.167170.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-12-28.167170.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-07T22-12-28.167170.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", 
"data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-12-28.167170.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_07T22_12_28.167170", "path": ["**/details_harness|winogrande|5_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2024-01-07T22-12-28.167170.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_42_00.048220", "path": ["results_2023-12-03T17-42-00.048220.parquet"]}, {"split": "2024_01_07T22_12_28.167170", "path": ["results_2024-01-07T22-12-28.167170.parquet"]}, {"split": "latest", "path": ["results_2024-01-07T22-12-28.167170.parquet"]}]}]} | 2024-01-07T22:15:12+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of maywell/Synatra-RP-Orca-2-7b-v0.1
Dataset automatically created during the evaluation run of model maywell/Synatra-RP-Orca-2-7b-v0.1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
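A minimal sketch (the repository id below is inferred from the leaderboard's "details_<org>__<model>" naming pattern and is an assumption; the config name is one of the 63 task configs listed in this card's metadata):

```python
from datasets import load_dataset

# Repo id assumed from the leaderboard's "details_<org>__<model>" pattern.
data = load_dataset(
    "open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1",
    "harness_winogrande_5",  # any of the 63 task configs from the metadata works
    split="train",
)
```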
## Latest results
These are the latest results from run 2024-01-07T22:12:28.167170 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of maywell/Synatra-RP-Orca-2-7b-v0.1\n\n\n\nDataset automatically created during the evaluation run of model maywell/Synatra-RP-Orca-2-7b-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-07T22:12:28.167170(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of maywell/Synatra-RP-Orca-2-7b-v0.1\n\n\n\nDataset automatically created during the evaluation run of model maywell/Synatra-RP-Orca-2-7b-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-07T22:12:28.167170(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
195,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of maywell/Synatra-RP-Orca-2-7b-v0.1\n\n\n\nDataset automatically created during the evaluation run of model maywell/Synatra-RP-Orca-2-7b-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-07T22:12:28.167170(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]"
] |
d473b379c5df10f643e834ecb9c11ec57e13e8da |
# Dataset Card for Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [tianlinliu0121/zephyr-7b-dpo-full-beta-0.2](https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.

The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T18:01:25.798597](https://huggingface.co/datasets/open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2/blob/main/results_2023-12-03T18-01-25.798597.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.30022744503411675,
"acc_stderr": 0.012625423152283044
},
"harness|gsm8k|5": {
"acc": 0.30022744503411675,
"acc_stderr": 0.012625423152283044
}
}
```
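To inspect these aggregates programmatically, a minimal sketch (the "results" config and the "latest" split come from this card's metadata; the exact record layout is not documented here, so print a row first):

```python
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2",
    "results",       # aggregated-metrics config documented in this card
    split="latest",  # alias for the most recent run
)
print(results[0])  # inspect the stored aggregate record
```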
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2 | [
"region:us"
] | 2023-12-03T17:57:43+00:00 | {"pretty_name": "Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2", "dataset_summary": "Dataset automatically created during the evaluation run of model [tianlinliu0121/zephyr-7b-dpo-full-beta-0.2](https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:01:25.798597](https://huggingface.co/datasets/open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2/blob/main/results_2023-12-03T18-01-25.798597.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.30022744503411675,\n \"acc_stderr\": 0.012625423152283044\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.30022744503411675,\n \"acc_stderr\": 0.012625423152283044\n }\n}\n```", "repo_url": "https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T17_57_40.359102", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-57-40.359102.parquet"]}, {"split": "2023_12_03T17_57_42.100432", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-57-42.100432.parquet"]}, {"split": "2023_12_03T18_01_20.411431", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-01-20.411431.parquet"]}, {"split": "2023_12_03T18_01_25.798597", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-01-25.798597.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-01-25.798597.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T17_57_40.359102", "path": ["results_2023-12-03T17-57-40.359102.parquet"]}, {"split": "2023_12_03T17_57_42.100432", "path": ["results_2023-12-03T17-57-42.100432.parquet"]}, {"split": "2023_12_03T18_01_20.411431", "path": ["results_2023-12-03T18-01-20.411431.parquet"]}, {"split": "2023_12_03T18_01_25.798597", "path": ["results_2023-12-03T18-01-25.798597.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-01-25.798597.parquet"]}]}]} | 2023-12-03T18:01:32+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to the evaluated task.

The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:01:25.798597 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:01:25.798597(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:01:25.798597(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
29,
31,
178,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:01:25.798597(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
79d7ff5794055e3ce0604e9c7cc6ba0cd66cd7e5 |
# Dataset Card for Evaluation run of maywell/Mini_Synatra_SFT
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/maywell/Mini_Synatra_SFT
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [maywell/Mini_Synatra_SFT](https://huggingface.co/maywell/Mini_Synatra_SFT) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__Mini_Synatra_SFT",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T18:00:26.162132](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Mini_Synatra_SFT/blob/main/results_2023-12-03T18-00-26.162132.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4488248673237301,
"acc_stderr": 0.013700157442788066
},
"harness|gsm8k|5": {
"acc": 0.4488248673237301,
"acc_stderr": 0.013700157442788066
}
}
```
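For per-sample error analysis, the details config can be pulled into pandas. This is only a sketch: the column names are not documented in this card, so inspect them after loading.

```python
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_maywell__Mini_Synatra_SFT",
    "harness_gsm8k_5",
    split="latest",
)
df = details.to_pandas()  # datasets.Dataset -> pandas.DataFrame
print(df.columns)         # column layout is undocumented here; check it first
```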
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_maywell__Mini_Synatra_SFT | [
"region:us"
] | 2023-12-03T18:00:28+00:00 | {"pretty_name": "Evaluation run of maywell/Mini_Synatra_SFT", "dataset_summary": "Dataset automatically created during the evaluation run of model [maywell/Mini_Synatra_SFT](https://huggingface.co/maywell/Mini_Synatra_SFT) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maywell__Mini_Synatra_SFT\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:00:26.162132](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Mini_Synatra_SFT/blob/main/results_2023-12-03T18-00-26.162132.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4488248673237301,\n \"acc_stderr\": 0.013700157442788066\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4488248673237301,\n \"acc_stderr\": 0.013700157442788066\n }\n}\n```", "repo_url": "https://huggingface.co/maywell/Mini_Synatra_SFT", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_00_26.162132", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-00-26.162132.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-00-26.162132.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_00_26.162132", "path": ["results_2023-12-03T18-00-26.162132.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-00-26.162132.parquet"]}]}]} | 2023-12-03T18:00:33+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of maywell/Mini_Synatra_SFT
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model maywell/Mini_Synatra_SFT on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to the evaluated task.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:00:26.162132 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of maywell/Mini_Synatra_SFT",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Mini_Synatra_SFT on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:00:26.162132(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of maywell/Mini_Synatra_SFT",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Mini_Synatra_SFT on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:00:26.162132(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of maywell/Mini_Synatra_SFT## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Mini_Synatra_SFT on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:00:26.162132(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
cc6fd5be667c3e23f737fef0e15738f8c4b5de60 | Mip-splatting data for the online viewer | ZehaoYu/mip-splatting | [
"region:us"
] | 2023-12-03T18:00:56+00:00 | {} | 2023-12-04T13:12:20+00:00 | [] | [] | TAGS
#region-us
| Mip-splatting data for the online viewer | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
aaa216a409345660d48508b05d0c2e9f3ca8b3a5 |
# Dataset Card for Evaluation run of maywell/PiVoT-0.1-early
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/maywell/PiVoT-0.1-early
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [maywell/PiVoT-0.1-early](https://huggingface.co/maywell/PiVoT-0.1-early) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to the evaluated task.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__PiVoT-0.1-early",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T18:08:31.894525](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__PiVoT-0.1-early/blob/main/results_2023-12-03T18-08-31.894525.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.444275966641395,
"acc_stderr": 0.013686685712261672
},
"harness|gsm8k|5": {
"acc": 0.444275966641395,
"acc_stderr": 0.013686685712261672
}
}
```
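Since this card records 2 runs, each run is also available as a timestamped split (names taken from this card's config metadata), which makes run-to-run comparison straightforward; a sketch:

```python
from datasets import load_dataset

repo = "open-llm-leaderboard/details_maywell__PiVoT-0.1-early"
# Split names are the run timestamps recorded in this card's config metadata.
run_a = load_dataset(repo, "harness_gsm8k_5", split="2023_12_03T18_05_21.417194")
run_b = load_dataset(repo, "harness_gsm8k_5", split="2023_12_03T18_08_31.894525")
print(len(run_a), len(run_b))  # same task, two evaluation runs
```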
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_maywell__PiVoT-0.1-early | [
"region:us"
] | 2023-12-03T18:05:24+00:00 | {"pretty_name": "Evaluation run of maywell/PiVoT-0.1-early", "dataset_summary": "Dataset automatically created during the evaluation run of model [maywell/PiVoT-0.1-early](https://huggingface.co/maywell/PiVoT-0.1-early) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maywell__PiVoT-0.1-early\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:08:31.894525](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__PiVoT-0.1-early/blob/main/results_2023-12-03T18-08-31.894525.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.444275966641395,\n \"acc_stderr\": 0.013686685712261672\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.444275966641395,\n \"acc_stderr\": 0.013686685712261672\n }\n}\n```", "repo_url": "https://huggingface.co/maywell/PiVoT-0.1-early", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_05_21.417194", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-05-21.417194.parquet"]}, {"split": "2023_12_03T18_08_31.894525", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-08-31.894525.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-08-31.894525.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_05_21.417194", "path": ["results_2023-12-03T18-05-21.417194.parquet"]}, {"split": "2023_12_03T18_08_31.894525", "path": ["results_2023-12-03T18-08-31.894525.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-08-31.894525.parquet"]}]}]} | 2023-12-03T18:08:39+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of maywell/PiVoT-0.1-early
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model maywell/PiVoT-0.1-early on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, corresponding to the evaluated task.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:08:31.894525 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of maywell/PiVoT-0.1-early",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-0.1-early on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:08:31.894525(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of maywell/PiVoT-0.1-early",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-0.1-early on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:08:31.894525(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of maywell/PiVoT-0.1-early## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-0.1-early on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:08:31.894525(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8b9e65041e9eba406f11bc44c09e28b21d7fcd6c | # DALL-E 3 text-image pairs
## Description
This dataset contains DALL-E images posted on Reddit (mixed with other AI-generated images from Midjourney and Stable Diffusion). | SilentAntagonist/dalle3-reddit-cogvlm-captions | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-12-03T18:05:29+00:00 | {"license": "cc-by-nc-4.0"} | 2024-01-08T16:42:41+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
| # DALL-E 3 text-image pairs
## Description
This dataset contains DALL-E images posted on Reddit (mixed with other AI-generated images from Midjourney and Stable Diffusion). | [
"# DALL-E 3 text-image pairs",
"## Description\n\nThis dataset contains DALL-E images posted on Reddit (mixed with other AI-generated images from Midjourney and Stable Diffusion)."
] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"# DALL-E 3 text-image pairs",
"## Description\n\nThis dataset contains DALL-E images posted on Reddit (mixed with other AI-generated images from Midjourney and Stable Diffusion)."
] | [
17,
11,
36
] | [
"passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n# DALL-E 3 text-image pairs## Description\n\nThis dataset contains DALL-E images posted on Reddit (mixed with other AI-generated images from Midjourney and Stable Diffusion)."
] |
519e1ed78e64e7ee649b6c3a4a522e81f09ea021 |
# Dataset Card for Evaluation run of maywell/PiVoT-0.1-Evil-a
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/maywell/PiVoT-0.1-Evil-a
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [maywell/PiVoT-0.1-Evil-a](https://huggingface.co/maywell/PiVoT-0.1-Evil-a) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__PiVoT-0.1-Evil-a",
"harness_gsm8k_5",
split="train")
```
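To get the aggregated metrics rather than the per-sample details, the "results" configuration described above can be loaded the same way. A minimal sketch (the config name and the "latest" split are taken from this card's configuration metadata):

```python
from datasets import load_dataset

# Aggregated results of the run; the "latest" split always points to the newest upload.
results = load_dataset("open-llm-leaderboard/details_maywell__PiVoT-0.1-Evil-a",
	"results",
	split="latest")
```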
## Latest results
These are the [latest results from run 2023-12-03T18:10:37.734166](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__PiVoT-0.1-Evil-a/blob/main/results_2023-12-03T18-10-37.734166.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4040940106141016,
"acc_stderr": 0.01351675297272172
},
"harness|gsm8k|5": {
"acc": 0.4040940106141016,
"acc_stderr": 0.01351675297272172
}
}
```
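The raw JSON file linked above can also be fetched programmatically. A sketch using `huggingface_hub` (the filename comes from the results link above):

```python
from huggingface_hub import hf_hub_download
import json

# Download the raw results file for this run and parse it locally.
path = hf_hub_download(
	repo_id="open-llm-leaderboard/details_maywell__PiVoT-0.1-Evil-a",
	filename="results_2023-12-03T18-10-37.734166.json",
	repo_type="dataset",
)
with open(path) as f:
	latest_results = json.load(f)
```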
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_maywell__PiVoT-0.1-Evil-a | [
"region:us"
] | 2023-12-03T18:05:43+00:00 | {"pretty_name": "Evaluation run of maywell/PiVoT-0.1-Evil-a", "dataset_summary": "Dataset automatically created during the evaluation run of model [maywell/PiVoT-0.1-Evil-a](https://huggingface.co/maywell/PiVoT-0.1-Evil-a) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maywell__PiVoT-0.1-Evil-a\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:10:37.734166](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__PiVoT-0.1-Evil-a/blob/main/results_2023-12-03T18-10-37.734166.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4040940106141016,\n \"acc_stderr\": 0.01351675297272172\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4040940106141016,\n \"acc_stderr\": 0.01351675297272172\n }\n}\n```", "repo_url": "https://huggingface.co/maywell/PiVoT-0.1-Evil-a", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_05_40.726563", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-05-40.726563.parquet"]}, {"split": "2023_12_03T18_10_37.734166", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-10-37.734166.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-10-37.734166.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_05_40.726563", "path": ["results_2023-12-03T18-05-40.726563.parquet"]}, {"split": "2023_12_03T18_10_37.734166", "path": ["results_2023-12-03T18-10-37.734166.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-10-37.734166.parquet"]}]}]} | 2023-12-03T18:10:44+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of maywell/PiVoT-0.1-Evil-a
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model maywell/PiVoT-0.1-Evil-a on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:10:37.734166(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of maywell/PiVoT-0.1-Evil-a",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-0.1-Evil-a on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:10:37.734166(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of maywell/PiVoT-0.1-Evil-a",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-0.1-Evil-a on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:10:37.734166(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of maywell/PiVoT-0.1-Evil-a## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-0.1-Evil-a on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:10:37.734166(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
68a71ff7ebed8c9d6d4a2fe5e81f5985d66e9d78 |
# Dataset Card for Evaluation run of chargoddard/loyal-piano-m7
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/loyal-piano-m7
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__loyal-piano-m7",
"harness_gsm8k_5",
split="train")
```
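If you want the aggregated metrics rather than the per-sample details, the "results" configuration described above can be loaded in the same way. A minimal sketch (config name and "latest" split taken from this card's configuration metadata):

```python
from datasets import load_dataset

# Aggregated results of the run; the "latest" split always points to the newest upload.
results = load_dataset("open-llm-leaderboard/details_chargoddard__loyal-piano-m7",
	"results",
	split="latest")
```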
## Latest results
These are the [latest results from run 2023-12-03T18:24:30.101089](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__loyal-piano-m7/blob/main/results_2023-12-03T18-24-30.101089.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5670962850644428,
"acc_stderr": 0.013647916362576054
},
"harness|gsm8k|5": {
"acc": 0.5670962850644428,
"acc_stderr": 0.013647916362576054
}
}
```
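The raw JSON file linked above can also be fetched programmatically. A sketch using `huggingface_hub` (the filename comes from the results link above):

```python
from huggingface_hub import hf_hub_download
import json

# Download the raw results file for this run and parse it locally.
path = hf_hub_download(
	repo_id="open-llm-leaderboard/details_chargoddard__loyal-piano-m7",
	filename="results_2023-12-03T18-24-30.101089.json",
	repo_type="dataset",
)
with open(path) as f:
	latest_results = json.load(f)
```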
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_chargoddard__loyal-piano-m7 | [
"region:us"
] | 2023-12-03T18:24:32+00:00 | {"pretty_name": "Evaluation run of chargoddard/loyal-piano-m7", "dataset_summary": "Dataset automatically created during the evaluation run of model [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__loyal-piano-m7\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:24:30.101089](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__loyal-piano-m7/blob/main/results_2023-12-03T18-24-30.101089.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5670962850644428,\n \"acc_stderr\": 0.013647916362576054\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5670962850644428,\n \"acc_stderr\": 0.013647916362576054\n }\n}\n```", "repo_url": "https://huggingface.co/chargoddard/loyal-piano-m7", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_24_30.101089", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-24-30.101089.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-24-30.101089.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_24_30.101089", "path": ["results_2023-12-03T18-24-30.101089.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-24-30.101089.parquet"]}]}]} | 2023-12-03T18:24:36+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of chargoddard/loyal-piano-m7
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model chargoddard/loyal-piano-m7 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:24:30.101089(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of chargoddard/loyal-piano-m7",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/loyal-piano-m7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:24:30.101089(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of chargoddard/loyal-piano-m7",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/loyal-piano-m7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:24:30.101089(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
68,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chargoddard/loyal-piano-m7## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/loyal-piano-m7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:24:30.101089(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9caa19cc3fea545c07b0e0a9fb3c6820a95e3f12 | # Dataset Card for "mscoco_simplified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | BubbleJoe/mscoco_simplified | [
"region:us"
] | 2023-12-03T18:29:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "restval", "path": "data/restval-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "sentids", "dtype": "int64"}, {"name": "sentences", "dtype": "string"}, {"name": "simplified", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37370437, "num_examples": 414113}, {"name": "test", "num_bytes": 2252431, "num_examples": 25010}, {"name": "restval", "num_bytes": 13747474, "num_examples": 152634}, {"name": "validation", "num_bytes": 2254719, "num_examples": 25010}], "download_size": 29875182, "dataset_size": 55625061}} | 2023-12-04T18:29:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "mscoco_simplified"
More Information needed | [
"# Dataset Card for \"mscoco_simplified\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"mscoco_simplified\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"mscoco_simplified\"\n\nMore Information needed"
] |
41fcd2734b6a39911bcd9028573dfe9f61cadcdc |
# Dataset Card for Evaluation run of chargoddard/llama-polyglot-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/llama-polyglot-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/llama-polyglot-13b](https://huggingface.co/chargoddard/llama-polyglot-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__llama-polyglot-13b",
"harness_gsm8k_5",
split="train")
```
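To load the aggregated metrics instead of the per-sample details, use the "results" configuration described above. A minimal sketch (config name and "latest" split taken from this card's configuration metadata):

```python
from datasets import load_dataset

# Aggregated results of the run; the "latest" split always points to the newest upload.
results = load_dataset("open-llm-leaderboard/details_chargoddard__llama-polyglot-13b",
	"results",
	split="latest")
```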
## Latest results
These are the [latest results from run 2023-12-03T18:35:49.174459](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-polyglot-13b/blob/main/results_2023-12-03T18-35-49.174459.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2259287338893101,
"acc_stderr": 0.011519098777279956
},
"harness|gsm8k|5": {
"acc": 0.2259287338893101,
"acc_stderr": 0.011519098777279956
}
}
```
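The raw JSON file linked above can also be fetched programmatically. A sketch using `huggingface_hub` (the filename comes from the results link above):

```python
from huggingface_hub import hf_hub_download
import json

# Download the raw results file for this run and parse it locally.
path = hf_hub_download(
	repo_id="open-llm-leaderboard/details_chargoddard__llama-polyglot-13b",
	filename="results_2023-12-03T18-35-49.174459.json",
	repo_type="dataset",
)
with open(path) as f:
	latest_results = json.load(f)
```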
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_chargoddard__llama-polyglot-13b | [
"region:us"
] | 2023-12-03T18:35:52+00:00 | {"pretty_name": "Evaluation run of chargoddard/llama-polyglot-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [chargoddard/llama-polyglot-13b](https://huggingface.co/chargoddard/llama-polyglot-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__llama-polyglot-13b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:35:49.174459](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-polyglot-13b/blob/main/results_2023-12-03T18-35-49.174459.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2259287338893101,\n \"acc_stderr\": 0.011519098777279956\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2259287338893101,\n \"acc_stderr\": 0.011519098777279956\n }\n}\n```", "repo_url": "https://huggingface.co/chargoddard/llama-polyglot-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_35_49.174459", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-35-49.174459.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-35-49.174459.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_35_49.174459", "path": ["results_2023-12-03T18-35-49.174459.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-35-49.174459.parquet"]}]}]} | 2023-12-03T18:35:56+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of chargoddard/llama-polyglot-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model chargoddard/llama-polyglot-13b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:35:49.174459(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of chargoddard/llama-polyglot-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/llama-polyglot-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:35:49.174459(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of chargoddard/llama-polyglot-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/llama-polyglot-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:35:49.174459(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chargoddard/llama-polyglot-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/llama-polyglot-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:35:49.174459(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7229f691fa01cfc06e74ad90c7e172f026d71875 |
# Dataset Card for Evaluation run of uukuguy/speechless-mistral-7b-dare-0.85
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-mistral-7b-dare-0.85
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-mistral-7b-dare-0.85](https://huggingface.co/uukuguy/speechless-mistral-7b-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-mistral-7b-dare-0.85",
"harness_gsm8k_5",
split="train")
```
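For the aggregated metrics rather than the per-sample details, the "results" configuration described above can be loaded the same way. A minimal sketch (config name and "latest" split taken from this card's configuration metadata):

```python
from datasets import load_dataset

# Aggregated results of the run; the "latest" split always points to the newest upload.
results = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-mistral-7b-dare-0.85",
	"results",
	split="latest")
```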
## Latest results
These are the [latest results from run 2023-12-03T18:43:10.266119](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-mistral-7b-dare-0.85/blob/main/results_2023-12-03T18-43-10.266119.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.45564821834723274,
"acc_stderr": 0.013718194542485601
},
"harness|gsm8k|5": {
"acc": 0.45564821834723274,
"acc_stderr": 0.013718194542485601
}
}
```
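The raw JSON file linked above can also be fetched programmatically. A sketch using `huggingface_hub` (the filename comes from the results link above):

```python
from huggingface_hub import hf_hub_download
import json

# Download the raw results file for this run and parse it locally.
path = hf_hub_download(
	repo_id="open-llm-leaderboard/details_uukuguy__speechless-mistral-7b-dare-0.85",
	filename="results_2023-12-03T18-43-10.266119.json",
	repo_type="dataset",
)
with open(path) as f:
	latest_results = json.load(f)
```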
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_uukuguy__speechless-mistral-7b-dare-0.85 | [
"region:us"
] | 2023-12-03T18:43:13+00:00 | {"pretty_name": "Evaluation run of uukuguy/speechless-mistral-7b-dare-0.85", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-mistral-7b-dare-0.85](https://huggingface.co/uukuguy/speechless-mistral-7b-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-mistral-7b-dare-0.85\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:43:10.266119](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-mistral-7b-dare-0.85/blob/main/results_2023-12-03T18-43-10.266119.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.45564821834723274,\n \"acc_stderr\": 0.013718194542485601\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45564821834723274,\n \"acc_stderr\": 0.013718194542485601\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/speechless-mistral-7b-dare-0.85", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_43_10.266119", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-43-10.266119.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-43-10.266119.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_43_10.266119", "path": ["results_2023-12-03T18-43-10.266119.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-43-10.266119.parquet"]}]}]} | 2023-12-03T18:43:17+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-mistral-7b-dare-0.85
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:43:10.266119(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of uukuguy/speechless-mistral-7b-dare-0.85",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:43:10.266119(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-mistral-7b-dare-0.85",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:43:10.266119(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-mistral-7b-dare-0.85## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-mistral-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:43:10.266119(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b77510a4c25058a2addaa8d2313b79f8839ebc05 | # Dataset Card for "patched_test_f_UCH_ps_50__v2023d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roa7n/patched_test_f_UCH_ps_50__v2023d | [
"region:us"
] | 2023-12-03T18:52:10+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 48828729, "num_examples": 110542}], "download_size": 4291765, "dataset_size": 48828729}} | 2023-12-03T18:52:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "patched_test_f_UCH_ps_50__v2023d"
More Information needed | [
"# Dataset Card for \"patched_test_f_UCH_ps_50__v2023d\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"patched_test_f_UCH_ps_50__v2023d\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"patched_test_f_UCH_ps_50__v2023d\"\n\nMore Information needed"
] |
b2b8a7baa6c058ecd2c6595648f3e02c37749cd3 |
# Dataset Card for Evaluation run of Jiayi-Pan/Tiny-Vicuna-1B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Jiayi-Pan/Tiny-Vicuna-1B](https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B",
"harness_gsm8k_5",
split="train")
```
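
The same repository also exposes the aggregated `results` configuration listed in this card's metadata; a minimal sketch of reading it, assuming the `latest` split naming shown in this card:

```python
from datasets import load_dataset

# "results" aggregates every evaluation run for this model; the "latest"
# split always points at the most recent one (see the configs metadata below).
results = load_dataset("open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B",
                       "results",
                       split="latest")
print(results[0])  # aggregated metrics for the newest run
```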
## Latest results
These are the [latest results from run 2023-12-03T18:54:15.372610](https://huggingface.co/datasets/open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B/blob/main/results_2023-12-03T18-54-15.372610.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.015163002274450341,
"acc_stderr": 0.003366022949726332
},
"harness|gsm8k|5": {
"acc": 0.015163002274450341,
"acc_stderr": 0.003366022949726332
}
}
```
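
Because each metric is reported together with its standard error, a rough two-sided 95% confidence interval can be read off directly; a small sketch, assuming the usual normal approximation:

```python
acc = 0.015163002274450341
acc_stderr = 0.003366022949726332

# Normal-approximation 95% confidence interval around the reported accuracy
# (assumption: acc_stderr is the standard error of the mean over GSM8K items).
low, high = acc - 1.96 * acc_stderr, acc + 1.96 * acc_stderr
print(f"gsm8k (5-shot) acc: {acc:.4f}, 95% CI [{low:.4f}, {high:.4f}]")
```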
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B | [
"region:us"
] | 2023-12-03T18:54:18+00:00 | {"pretty_name": "Evaluation run of Jiayi-Pan/Tiny-Vicuna-1B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Jiayi-Pan/Tiny-Vicuna-1B](https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:54:15.372610](https://huggingface.co/datasets/open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B/blob/main/results_2023-12-03T18-54-15.372610.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.015163002274450341,\n \"acc_stderr\": 0.003366022949726332\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.015163002274450341,\n \"acc_stderr\": 0.003366022949726332\n }\n}\n```", "repo_url": "https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_54_15.372610", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-54-15.372610.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-54-15.372610.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_54_15.372610", "path": ["results_2023-12-03T18-54-15.372610.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-54-15.372610.parquet"]}]}]} | 2023-12-03T18:54:26+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Jiayi-Pan/Tiny-Vicuna-1B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Jiayi-Pan/Tiny-Vicuna-1B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:54:15.372610 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Jiayi-Pan/Tiny-Vicuna-1B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Jiayi-Pan/Tiny-Vicuna-1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:54:15.372610(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Jiayi-Pan/Tiny-Vicuna-1B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Jiayi-Pan/Tiny-Vicuna-1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:54:15.372610(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Jiayi-Pan/Tiny-Vicuna-1B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Jiayi-Pan/Tiny-Vicuna-1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:54:15.372610(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
50fb6116a9e73836f98354cd9e1798ffd8ed0fdf |
# Dataset Card for CPPE - 5
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/Rishit-dagli/CPPE-Dataset
- **Paper:** [CPPE-5: Medical Personal Protective Equipment Dataset](https://arxiv.org/abs/2112.09569)
- **Leaderboard:** https://paperswithcode.com/sota/object-detection-on-cppe-5
- **Point of Contact:** [email protected]
### Dataset Summary
CPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.
Some features of this dataset are:
* high quality images and annotations (~4.6 bounding boxes per image)
* real-life images unlike any current such dataset
* majority of non-iconic images (allowing easy deployment to real-world environments)
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at https://paperswithcode.com/sota/object-detection-on-cppe-5. The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.
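
For readers less familiar with the metric, those thresholds compare predicted and ground-truth boxes by their overlap ratio. A minimal sketch of IoU for two boxes in the COCO `[x, y, width, height]` format used by this dataset (the second box below is a made-up prediction, not part of the dataset):

```python
def iou(box_a, box_b):
    """Intersection over Union of two COCO-format [x, y, w, h] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width/height of the intersection rectangle (zero if the boxes are disjoint).
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou([302.0, 109.0, 73.0, 52.0], [310.0, 115.0, 70.0, 50.0]))
```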
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x2373B065C18>,
'width': 943,
'height': 663,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
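
A minimal sketch of loading such a record with the `datasets` library, assuming this repository id (the `objects` column decodes to a dictionary of parallel lists):

```python
from datasets import load_dataset

ds = load_dataset("yunusskeete/cppe5", split="train")
sample = ds[0]
print(sample["image_id"], sample["width"], sample["height"])
print(sample["objects"]["bbox"])      # one [x, y, w, h] box per object
print(sample["objects"]["category"])  # integer class ids (0-4, see below)
```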
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category, with possible values including `Coverall` (0),`Face_Shield` (1),`Gloves` (2),`Goggles` (3) and `Mask` (4)
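
As an illustration of these fields, a small sketch that overlays the COCO-format boxes on a decoded image, given a `sample` as in the loading snippet above (PIL is already available through the `image` feature; the drawing style is arbitrary):

```python
from PIL import ImageDraw

def draw_boxes(sample):
    """Draw each [x, y, w, h] box and its category id onto the sample image."""
    image = sample["image"].copy()
    draw = ImageDraw.Draw(image)
    for (x, y, w, h), cat in zip(sample["objects"]["bbox"],
                                 sample["objects"]["category"]):
        # COCO boxes store the top-left corner plus width/height.
        draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
        draw.text((x, y), str(cat), fill="red")
    return image
```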
### Data Splits
The data is split into training and testing sets. The training set contains 1000 images and the test set 29 images.
## Dataset Creation
### Curation Rationale
From the paper:
> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect
worldwide.
### Source Data
#### Initial Data Collection and Normalization
The images in the CPPE-5 dataset were collected using the following process:
* Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the "Original" size. On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.), the "Original" size is an exact copy of the image uploaded by author.
* Extract relevant metadata: Flickr contains images each with searchable metadata, we extract the following relevant
metadata:
* A direct link to the original image on Flickr
* Width and height of the image
* Title given to the image by the author
* Date and time the image was uploaded on
* Flickr username of the author of the image
* Flickr Name of the author of the image
* Flickr profile of the author of the image
* The License image is licensed under
* MD5 hash of the original image
* Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion
of images from Google Images. For these set of images we extract the following metadata:
* A direct link to the original image
* Width and height of the image
* MD5 hash of the original image
* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.
* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors
#### Who are the source language producers?
The images for this dataset were collected from Flickr and Google Images.
### Annotations
#### Annotation process
The dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset volunteers were provided the following table:
|Item |Description |
|------------|--------------------------------------------------------------------- |
|coveralls | Coveralls are hospital gowns worn by medical professionals as in order to provide a barrier between patient and professional, these usually cover most of the exposed skin surfaces of the professional medics.|
|mask | Mask prevents airborne transmission of infections between patients and/or treating personnel by blocking the movement of pathogens (primarily bacteria and viruses) shed in respiratory droplets and aerosols into and from the wearer’s mouth and nose.|
|face shield | Face shield aims to protect the wearer’s entire face (or part of it) from hazards such as flying objects and road debris, chemical splashes (in laboratories or in industry), or potentially infectious materials (in medical and laboratory environments).|
|gloves | Gloves are used during medical examinations and procedures to help prevent cross-contamination between caregivers and patients.|
|goggles | Goggles, or safety glasses, are forms of protective eye wear that usually enclose or protect the area surrounding the eye in order to prevent particulates, water or chemicals from striking the eyes.|
as well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases where the number of bounding boxes is different, the labels for one or more of the bounding boxes are different, or the two volunteer annotations are sufficiently different, a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive
annotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.
#### Who are the annotators?
In both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dagli, Rishit, and Ali Mustufa Shaikh.
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{dagli2021cppe5,
title={CPPE-5: Medical Personal Protective Equipment Dataset},
author={Rishit Dagli and Ali Mustufa Shaikh},
year={2021},
eprint={2112.09569},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | yunusskeete/cppe5 | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"medical-personal-protective-equipment-detection",
"arxiv:2112.09569",
"region:us"
] | 2023-12-03T18:58:55+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["object-detection"], "task_ids": [], "paperswithcode_id": "cppe-5", "pretty_name": "CPPE - 5", "tags": ["medical-personal-protective-equipment-detection"], "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": {"class_label": {"names": {"0": "Coverall", "1": "Face_Shield", "2": "Gloves", "3": "Goggles", "4": "Mask"}}}}]}], "splits": [{"name": "train", "num_bytes": 240481257, "num_examples": 779}, {"name": "test", "num_bytes": 4172715, "num_examples": 249}], "download_size": 238482705, "dataset_size": 244653972}} | 2023-12-03T22:35:47+00:00 | [
"2112.09569"
] | [
"en"
] | TAGS
#task_categories-object-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #medical-personal-protective-equipment-detection #arxiv-2112.09569 #region-us
| Dataset Card for CPPE - 5
=========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository: URL
* Paper: CPPE-5: Medical Personal Protective Equipment Dataset
* Leaderboard: URL
* Point of Contact: URL@URL
### Dataset Summary
CPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.
Some features of this dataset are:
* high quality images and annotations (~4.6 bounding boxes per image)
* real-life images unlike any current such dataset
* majority of non-iconic images (allowing easy deployment to real-world environments)
### Supported Tasks and Leaderboards
* 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.
### Languages
English
Dataset Structure
-----------------
### Data Instances
A data point comprises an image and its object annotations.
### Data Fields
* 'image_id': the image id
* 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'
* 'width': the image width
* 'height': the image height
* 'objects': a dictionary containing bounding box metadata for the objects present on the image
+ 'id': the annotation id
+ 'area': the area of the bounding box
+ 'bbox': the object's bounding box (in the coco format)
+ 'category': the object's category, with possible values including 'Coverall' (0),'Face\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4)
### Data Splits
The data is split into training and testing sets. The training set contains 1000 images and the test set 29 images.
Dataset Creation
----------------
### Curation Rationale
From the paper:
>
> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect
> worldwide.
>
>
>
### Source Data
#### Initial Data Collection and Normalization
The images in the CPPE-5 dataset were collected using the following process:
* Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the "Original" size. On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.), the "Original" size is an exact copy of the image uploaded by author.
* Extract relevant metadata: Flickr contains images each with searchable metadata, we extract the following relevant
metadata:
+ A direct link to the original image on Flickr
+ Width and height of the image
+ Title given to the image by the author
+ Date and time the image was uploaded on
+ Flickr username of the author of the image
+ Flickr Name of the author of the image
+ Flickr profile of the author of the image
+ The License image is licensed under
+ MD5 hash of the original image
* Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion
of images from Google Images. For these set of images we extract the following metadata:
+ A direct link to the original image
+ Width and height of the image
+ MD5 hash of the original image
* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.
* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors
#### Who are the source language producers?
The images for this dataset were collected from Flickr and Google Images.
### Annotations
#### Annotation process
The dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset volunteers were provided the following table:
as well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases where the number of bounding boxes is different, the labels for one or more of the bounding boxes are different, or the two volunteer annotations are sufficiently different, a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive
annotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.
#### Who are the annotators?
In both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg.
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Dagli, Rishit, and Ali Mustufa Shaikh.
### Licensing Information
### Contributions
Thanks to @mariosasko for adding this dataset.
| [
"### Dataset Summary\n\n\nCPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.\n\n\nSome features of this dataset are:\n\n\n* high quality images and annotations (~4.6 bounding boxes per image)\n* real-life images unlike any current such dataset\n* majority of non-iconic images (allowing easy deployment to real-world environments)",
"### Supported Tasks and Leaderboards\n\n\n* 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA data point comprises an image and its object annotations.",
"### Data Fields\n\n\n* 'image': the image id\n* 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': the image width\n* 'height': the image height\n* 'objects': a dictionary containing bounding box metadata for the objects present on the image\n\t+ 'id': the annotation id\n\t+ 'area': the area of the bounding box\n\t+ 'bbox': the object's bounding box (in the coco format)\n\t+ 'category': the object's category, with possible values including 'Coverall' (0),'Face\\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4)",
"### Data Splits\n\n\nThe data is split into training and testing set. The training set contains 1000 images and test set 29 images.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect\n> worldwide.\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe images in the CPPE-5 dataset were collected using the following process:\n\n\n* Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the \"Original\" size. On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.), the \"Original\" size is an exact copy of the image uploaded by author.\n* Extract relevant metadata: Flickr contains images each with searchable metadata, we extract the following relevant\nmetadata:\n\t+ A direct link to the original image on Flickr\n\t+ Width and height of the image\n\t+ Title given to the image by the author\n\t+ Date and time the image was uploaded on\n\t+ Flickr username of the author of the image\n\t+ Flickr Name of the author of the image\n\t+ Flickr profile of the author of the image\n\t+ The License image is licensed under\n\t+ MD5 hash of the original image\n* Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion\nof images from Google Images. For these set of images we extract the following metadata:\n\t+ A direct link to the original image\n\t+ Width and height of the image\n\t+ MD5 hash of the original image\n* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.\n* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors",
"#### Who are the source language producers?\n\n\nThe images for this dataset were collected from Flickr and Google Images.",
"### Annotations",
"#### Annotation process\n\n\nThe dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset volunteers were provided the following table:\n\n\n\nas well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases that: the number of bounding boxes are different, the labels for on or more of the bounding boxes are different or two volunteer annotations are sufficiently different; a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive\nannotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.",
"#### Who are the annotators?\n\n\nIn both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg.",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDagli, Rishit, and Ali Mustufa Shaikh.",
"### Licensing Information",
"### Contributions\n\n\nThanks to @mariosasko for adding this dataset."
] | [
"TAGS\n#task_categories-object-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #medical-personal-protective-equipment-detection #arxiv-2112.09569 #region-us \n",
"### Dataset Summary\n\n\nCPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.\n\n\nSome features of this dataset are:\n\n\n* high quality images and annotations (~4.6 bounding boxes per image)\n* real-life images unlike any current such dataset\n* majority of non-iconic images (allowing easy deployment to real-world environments)",
"### Supported Tasks and Leaderboards\n\n\n* 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA data point comprises an image and its object annotations.",
"### Data Fields\n\n\n* 'image': the image id\n* 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': the image width\n* 'height': the image height\n* 'objects': a dictionary containing bounding box metadata for the objects present on the image\n\t+ 'id': the annotation id\n\t+ 'area': the area of the bounding box\n\t+ 'bbox': the object's bounding box (in the coco format)\n\t+ 'category': the object's category, with possible values including 'Coverall' (0),'Face\\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4)",
"### Data Splits\n\n\nThe data is split into training and testing set. The training set contains 1000 images and test set 29 images.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect\n> worldwide.\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe images in the CPPE-5 dataset were collected using the following process:\n\n\n* Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the \"Original\" size. On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.), the \"Original\" size is an exact copy of the image uploaded by author.\n* Extract relevant metadata: Flickr contains images each with searchable metadata, we extract the following relevant\nmetadata:\n\t+ A direct link to the original image on Flickr\n\t+ Width and height of the image\n\t+ Title given to the image by the author\n\t+ Date and time the image was uploaded on\n\t+ Flickr username of the author of the image\n\t+ Flickr Name of the author of the image\n\t+ Flickr profile of the author of the image\n\t+ The License image is licensed under\n\t+ MD5 hash of the original image\n* Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion\nof images from Google Images. For these set of images we extract the following metadata:\n\t+ A direct link to the original image\n\t+ Width and height of the image\n\t+ MD5 hash of the original image\n* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.\n* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors",
"#### Who are the source language producers?\n\n\nThe images for this dataset were collected from Flickr and Google Images.",
"### Annotations",
"#### Annotation process\n\n\nThe dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset volunteers were provided the following table:\n\n\n\nas well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases that: the number of bounding boxes are different, the labels for on or more of the bounding boxes are different or two volunteer annotations are sufficiently different; a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive\nannotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.",
"#### Who are the annotators?\n\n\nIn both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg.",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDagli, Rishit, and Ali Mustufa Shaikh.",
"### Licensing Information",
"### Contributions\n\n\nThanks to @mariosasko for adding this dataset."
] | [
100,
125,
98,
12,
20,
273,
34,
141,
4,
349,
25,
5,
297,
39,
18,
7,
8,
14,
19,
6,
17
] | [
"passage: TAGS\n#task_categories-object-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #medical-personal-protective-equipment-detection #arxiv-2112.09569 #region-us \n### Dataset Summary\n\n\nCPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.\n\n\nSome features of this dataset are:\n\n\n* high quality images and annotations (~4.6 bounding boxes per image)\n* real-life images unlike any current such dataset\n* majority of non-iconic images (allowing easy deployment to real-world environments)### Supported Tasks and Leaderboards\n\n\n* 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA data point comprises an image and its object annotations.",
"passage: ### Data Fields\n\n\n* 'image': the image id\n* 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': the image width\n* 'height': the image height\n* 'objects': a dictionary containing bounding box metadata for the objects present on the image\n\t+ 'id': the annotation id\n\t+ 'area': the area of the bounding box\n\t+ 'bbox': the object's bounding box (in the coco format)\n\t+ 'category': the object's category, with possible values including 'Coverall' (0),'Face\\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4)### Data Splits\n\n\nThe data is split into training and testing set. The training set contains 1000 images and test set 29 images.\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect\n> worldwide.\n> \n> \n>### Source Data"
] |
71b5a3936bee1d8709baf7d4df3f97620ee8be1d |
# Dataset Card for Evaluation run of yeen214/llama2_7b_merge_orcafamily
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yeen214/llama2_7b_merge_orcafamily
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [yeen214/llama2_7b_merge_orcafamily](https://huggingface.co/yeen214/llama2_7b_merge_orcafamily) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yeen214__llama2_7b_merge_orcafamily",
"harness_gsm8k_5",
split="train")
```
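
To discover which configurations and run splits exist without downloading the data, the `datasets` helper functions below can be used; the expected names in the comments follow the `configs` metadata of this card:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/details_yeen214__llama2_7b_merge_orcafamily"
print(get_dataset_config_names(repo))
# e.g. ['harness_gsm8k_5', 'results']
print(get_dataset_split_names(repo, "harness_gsm8k_5"))
# e.g. ['2023_12_03T18_59_14.919755', 'latest']
```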
## Latest results
These are the [latest results from run 2023-12-03T18:59:14.919755](https://huggingface.co/datasets/open-llm-leaderboard/details_yeen214__llama2_7b_merge_orcafamily/blob/main/results_2023-12-03T18-59-14.919755.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2312357846853677,
"acc_stderr": 0.0116135875031666
},
"harness|gsm8k|5": {
"acc": 0.2312357846853677,
"acc_stderr": 0.0116135875031666
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_yeen214__llama2_7b_merge_orcafamily | [
"region:us"
] | 2023-12-03T18:59:17+00:00 | {"pretty_name": "Evaluation run of yeen214/llama2_7b_merge_orcafamily", "dataset_summary": "Dataset automatically created during the evaluation run of model [yeen214/llama2_7b_merge_orcafamily](https://huggingface.co/yeen214/llama2_7b_merge_orcafamily) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yeen214__llama2_7b_merge_orcafamily\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T18:59:14.919755](https://huggingface.co/datasets/open-llm-leaderboard/details_yeen214__llama2_7b_merge_orcafamily/blob/main/results_2023-12-03T18-59-14.919755.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2312357846853677,\n \"acc_stderr\": 0.0116135875031666\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2312357846853677,\n \"acc_stderr\": 0.0116135875031666\n }\n}\n```", "repo_url": "https://huggingface.co/yeen214/llama2_7b_merge_orcafamily", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T18_59_14.919755", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-59-14.919755.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T18-59-14.919755.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T18_59_14.919755", "path": ["results_2023-12-03T18-59-14.919755.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T18-59-14.919755.parquet"]}]}]} | 2023-12-03T18:59:22+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of yeen214/llama2_7b_merge_orcafamily
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model yeen214/llama2_7b_merge_orcafamily on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T18:59:14.919755 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of yeen214/llama2_7b_merge_orcafamily",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeen214/llama2_7b_merge_orcafamily on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:59:14.919755(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of yeen214/llama2_7b_merge_orcafamily",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeen214/llama2_7b_merge_orcafamily on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T18:59:14.919755(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
26,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of yeen214/llama2_7b_merge_orcafamily## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeen214/llama2_7b_merge_orcafamily on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T18:59:14.919755(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c4c469c08f6abb5f93e1d325b92847e94ea580dd |
# Dataset Card for Evaluation run of prithivida/Asimov-7B-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/prithivida/Asimov-7B-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [prithivida/Asimov-7B-v2](https://huggingface.co/prithivida/Asimov-7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_prithivida__Asimov-7B-v2",
"harness_gsm8k_5",
split="train")
```
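The aggregated metrics live in the "results" configuration described above; per this card's metadata it exposes a "latest" split that tracks the newest run. A minimal sketch for reading it:

```python
from datasets import load_dataset

# Aggregated results of the most recent run (config and split names taken from this card)
results = load_dataset("open-llm-leaderboard/details_prithivida__Asimov-7B-v2",
    "results",
    split="latest")
```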
## Latest results
These are the [latest results from run 2023-12-03T19:02:27.334666](https://huggingface.co/datasets/open-llm-leaderboard/details_prithivida__Asimov-7B-v2/blob/main/results_2023-12-03T19-02-27.334666.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.10917361637604246,
"acc_stderr": 0.008590089300511155
},
"harness|gsm8k|5": {
"acc": 0.10917361637604246,
"acc_stderr": 0.008590089300511155
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_prithivida__Asimov-7B-v2 | [
"region:us"
] | 2023-12-03T19:02:44+00:00 | {"pretty_name": "Evaluation run of prithivida/Asimov-7B-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [prithivida/Asimov-7B-v2](https://huggingface.co/prithivida/Asimov-7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_prithivida__Asimov-7B-v2\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:02:27.334666](https://huggingface.co/datasets/open-llm-leaderboard/details_prithivida__Asimov-7B-v2/blob/main/results_2023-12-03T19-02-27.334666.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.10917361637604246,\n \"acc_stderr\": 0.008590089300511155\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10917361637604246,\n \"acc_stderr\": 0.008590089300511155\n }\n}\n```", "repo_url": "https://huggingface.co/prithivida/Asimov-7B-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_02_27.334666", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-02-27.334666.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-02-27.334666.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_02_27.334666", "path": ["results_2023-12-03T19-02-27.334666.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-02-27.334666.parquet"]}]}]} | 2023-12-03T19:03:16+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of prithivida/Asimov-7B-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model prithivida/Asimov-7B-v2 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
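For instance (a minimal sketch; repo and config names are the ones given in this card):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_prithivida__Asimov-7B-v2",
    "harness_gsm8k_5",
    split="train")
```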
## Latest results
These are the latest results from run 2023-12-03T19:02:27.334666 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of prithivida/Asimov-7B-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model prithivida/Asimov-7B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:02:27.334666(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of prithivida/Asimov-7B-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model prithivida/Asimov-7B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:02:27.334666(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of prithivida/Asimov-7B-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model prithivida/Asimov-7B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:02:27.334666(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2f5e7df7b9513a0cd3186624250b0c246d172251 |
# Dataset Card for Evaluation run of uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85",
"harness_gsm8k_5",
split="train")
```
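Likewise, the aggregated results described above can be pulled from the "results" configuration (config and split names as listed in this card's metadata):

```python
from datasets import load_dataset

# "latest" always points at the most recent evaluation run
results = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85",
    "results",
    split="latest")
```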
## Latest results
These are the [latest results from run 2023-12-03T19:04:28.043244](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85/blob/main/results_2023-12-03T19-04-28.043244.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.40333586050037906,
"acc_stderr": 0.013512654781814695
},
"harness|gsm8k|5": {
"acc": 0.40333586050037906,
"acc_stderr": 0.013512654781814695
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_uukuguy__speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85 | [
"region:us"
] | 2023-12-03T19:04:33+00:00 | {"pretty_name": "Evaluation run of uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:04:28.043244](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85/blob/main/results_2023-12-03T19-04-28.043244.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.40333586050037906,\n \"acc_stderr\": 0.013512654781814695\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.40333586050037906,\n \"acc_stderr\": 0.013512654781814695\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_04_28.043244", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-04-28.043244.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-04-28.043244.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_04_28.043244", "path": ["results_2023-12-03T19-04-28.043244.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-04-28.043244.parquet"]}]}]} | 2023-12-03T19:05:01+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
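For example (minimal sketch, using the repo and config names from this card):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85",
    "harness_gsm8k_5",
    split="train")
```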
## Latest results
These are the latest results from run 2023-12-03T19:04:28.043244 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:04:28.043244(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:04:28.043244(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
41,
31,
190,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:04:28.043244(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b3598c8c5831023f549f803b43ba408961aa4f85 | # Dataset Card for "patched_test_f_SPOUT_ps_50__v2023d"
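The card itself provides no usage example; a minimal loading sketch, assuming the dataset is hosted as a standard Hub dataset under the repo id from this card's metadata:

```python
from datasets import load_dataset

# Features per the card metadata: id (string), sequence_str (string), label (int64)
ds = load_dataset("roa7n/patched_test_f_SPOUT_ps_50__v2023d", split="train")
```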
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roa7n/patched_test_f_SPOUT_ps_50__v2023d | [
"region:us"
] | 2023-12-03T19:06:18+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1947829, "num_examples": 5816}], "download_size": 187258, "dataset_size": 1947829}} | 2023-12-03T19:06:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "patched_test_f_SPOUT_ps_50__v2023d"
More Information needed | [
"# Dataset Card for \"patched_test_f_SPOUT_ps_50__v2023d\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"patched_test_f_SPOUT_ps_50__v2023d\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"patched_test_f_SPOUT_ps_50__v2023d\"\n\nMore Information needed"
] |
1e74e512f8d86fa696f9b472a4dfc59b9c5cc7d4 |
# Dataset Card for Evaluation run of HuggingFaceH4/mistral-7b-sft-beta
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta",
"harness_gsm8k_5",
split="train")
```
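To read the aggregated metrics instead of the per-sample details, query the "results" configuration (names per this card's metadata):

```python
from datasets import load_dataset

# Aggregated metrics of the latest run
results = load_dataset("open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta",
    "results",
    split="latest")
```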
## Latest results
These are the [latest results from run 2023-12-03T19:08:18.030621](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta/blob/main/results_2023-12-03T19-08-18.030621.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.3646702047005307,
"acc_stderr": 0.013258428375662245
},
"harness|gsm8k|5": {
"acc": 0.3646702047005307,
"acc_stderr": 0.013258428375662245
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta | [
"region:us"
] | 2023-12-03T19:08:20+00:00 | {"pretty_name": "Evaluation run of HuggingFaceH4/mistral-7b-sft-beta", "dataset_summary": "Dataset automatically created during the evaluation run of model [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:08:18.030621](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta/blob/main/results_2023-12-03T19-08-18.030621.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3646702047005307,\n \"acc_stderr\": 0.013258428375662245\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3646702047005307,\n \"acc_stderr\": 0.013258428375662245\n }\n}\n```", "repo_url": "https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_08_18.030621", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-08-18.030621.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-08-18.030621.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_08_18.030621", "path": ["results_2023-12-03T19-08-18.030621.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-08-18.030621.parquet"]}]}]} | 2023-12-03T19:08:24+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of HuggingFaceH4/mistral-7b-sft-beta
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model HuggingFaceH4/mistral-7b-sft-beta on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
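For instance (sketch; repo and config names as given in this card):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_HuggingFaceH4__mistral-7b-sft-beta",
    "harness_gsm8k_5",
    split="train")
```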
## Latest results
These are the latest results from run 2023-12-03T19:08:18.030621 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of HuggingFaceH4/mistral-7b-sft-beta",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model HuggingFaceH4/mistral-7b-sft-beta on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:08:18.030621(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of HuggingFaceH4/mistral-7b-sft-beta",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model HuggingFaceH4/mistral-7b-sft-beta on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:08:18.030621(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of HuggingFaceH4/mistral-7b-sft-beta## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model HuggingFaceH4/mistral-7b-sft-beta on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:08:18.030621(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
09e383a868f2cda43b224092af9f8ac28d7f86c7 |
# Dataset Card for Evaluation run of Walmart-the-bag/MysticFusion-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Walmart-the-bag/MysticFusion-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Walmart-the-bag/MysticFusion-13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Walmart-the-bag__MysticFusion-13B",
"harness_gsm8k_5",
split="train")
```
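The "results" configuration mentioned above can be loaded the same way (sketch; names per this card's metadata):

```python
from datasets import load_dataset

# Aggregated results; "latest" tracks the newest run
results = load_dataset("open-llm-leaderboard/details_Walmart-the-bag__MysticFusion-13B",
    "results",
    split="latest")
```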
## Latest results
These are the [latest results from run 2023-12-03T19:14:35.382111](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__MysticFusion-13B/blob/main/results_2023-12-03T19-14-35.382111.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.24791508718726307,
"acc_stderr": 0.011893980214826166
},
"harness|gsm8k|5": {
"acc": 0.24791508718726307,
"acc_stderr": 0.011893980214826166
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Walmart-the-bag__MysticFusion-13B | [
"region:us"
] | 2023-12-03T19:14:38+00:00 | {"pretty_name": "Evaluation run of Walmart-the-bag/MysticFusion-13B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Walmart-the-bag/MysticFusion-13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Walmart-the-bag__MysticFusion-13B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:14:35.382111](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__MysticFusion-13B/blob/main/results_2023-12-03T19-14-35.382111.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24791508718726307,\n \"acc_stderr\": 0.011893980214826166\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.24791508718726307,\n \"acc_stderr\": 0.011893980214826166\n }\n}\n```", "repo_url": "https://huggingface.co/Walmart-the-bag/MysticFusion-13B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_14_35.382111", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-14-35.382111.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-14-35.382111.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_14_35.382111", "path": ["results_2023-12-03T19-14-35.382111.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-14-35.382111.parquet"]}]}]} | 2023-12-03T19:14:43+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Walmart-the-bag/MysticFusion-13B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Walmart-the-bag/MysticFusion-13B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
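For example (a minimal sketch with the repo and config names from this card):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Walmart-the-bag__MysticFusion-13B",
    "harness_gsm8k_5",
    split="train")
```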
## Latest results
These are the latest results from run 2023-12-03T19:14:35.382111 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Walmart-the-bag/MysticFusion-13B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/MysticFusion-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:14:35.382111(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Walmart-the-bag/MysticFusion-13B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/MysticFusion-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:14:35.382111(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Walmart-the-bag/MysticFusion-13B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/MysticFusion-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:14:35.382111(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3907ac940c10e3ad1bd13413fdec7d5ba683c4b3 | A monolingual corpus of the Mari language in various genres, containing over 20 million word occurrences.
The following genres are represented:
|Genre (Mari)|Russian|English|
|:-----|:-------|:-------|
|мутер|словарь|dictionary|
|газетысе увер|газетные новости|periodical news|
|прозо|проза|prose|
|фольклор|фольклор|folklore|
|публицистике|публицистика|publicistic literature|
|поэзий|поэзия|poetry|
|трагикомедий|трагикомедия|tragicomedy|
|пьесе|пьеса|play|
|драме|драма|drama|
|комедий-водевиль|водевиль|vaudeville|
|комедий|комедия|comedy|
|йоча ойлымаш-влак|детские рассказы|children's stories|
|сценарий|сценарий|scenario|
|йомак|сказка|fairytale|
|Библий|Библия|Bible|
|повесть|повесть|novella|
|ойлымаш-влак|рассказы|stories|
|ойлымаш ден йомак-влак|рассказы и сказки|stories and fairytales|
|автор нерген|об авторе|about the author|
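For quick experimentation, the corpus can be loaded with the Hugging Face `datasets` library. The snippet below is only a minimal sketch: the split name and the column names (`text`, `genre`) are assumptions for illustration and may differ from the corpus's actual schema.

```python
from datasets import load_dataset

# Load the corpus from the Hub (the "train" split name is assumed here).
corpus = load_dataset("mari-lab/mari-monolingual-corpus", split="train")

# Example: keep only prose ("прозо") documents, assuming a `genre` column exists.
prose = corpus.filter(lambda row: row["genre"] == "прозо")

# Rough word-occurrence count over the assumed `text` column.
n_words = sum(len(row["text"].split()) for row in prose)
print(f"{len(prose)} prose documents, ~{n_words} word occurrences")
```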
```
@inproceedings{chemyshev2023mari,
title={Mari monolingual corpus},
author={Andrei Chemyshev and Gennadii Sabantsev and Nadezhda Timofeeva and Vasilii Semenov},
year={2023}
}
``` | mari-lab/mari-monolingual-corpus | [
"language:mhr",
"region:us"
] | 2023-12-03T19:19:07+00:00 | {"language": ["mhr"]} | 2023-12-06T07:30:53+00:00 | [] | [
"mhr"
] | TAGS
#language-Eastern Mari #region-us
| A monolingual corpus of the Mari language in various genres, containing over 20 million word occurrences.
The following genres are represented:
| [] | [
"TAGS\n#language-Eastern Mari #region-us \n"
] | [
13
] | [
"passage: TAGS\n#language-Eastern Mari #region-us \n"
] |
bfaee55948719d4b07943a0acb5b9b06642e1ba9 |
# Dataset Card for Evaluation run of xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128](https://huggingface.co/xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_xxyyy123__1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:24:48.526699](https://huggingface.co/datasets/open-llm-leaderboard/details_xxyyy123__1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128/blob/main/results_2023-12-03T19-24-48.526699.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.40181956027293403,
"acc_stderr": 0.013504357787494032
},
"harness|gsm8k|5": {
"acc": 0.40181956027293403,
"acc_stderr": 0.013504357787494032
}
}
```
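For a rough sense of the uncertainty behind these numbers, the accuracy and its standard error can be turned into an approximate confidence interval. The sketch below assumes `acc_stderr` is a standard error to which a normal approximation applies; it is illustrative only.

```python
# Values copied from the results above.
acc = 0.40181956027293403
acc_stderr = 0.013504357787494032

# Approximate 95% confidence interval under a normal approximation.
z = 1.96
low, high = acc - z * acc_stderr, acc + z * acc_stderr
print(f"GSM8K 5-shot accuracy: {acc:.3f} (95% CI ~ [{low:.3f}, {high:.3f}])")
```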
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_xxyyy123__1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128 | [
"region:us"
] | 2023-12-03T19:24:51+00:00 | {"pretty_name": "Evaluation run of xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128", "dataset_summary": "Dataset automatically created during the evaluation run of model [xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128](https://huggingface.co/xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_xxyyy123__1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:24:48.526699](https://huggingface.co/datasets/open-llm-leaderboard/details_xxyyy123__1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128/blob/main/results_2023-12-03T19-24-48.526699.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.40181956027293403,\n \"acc_stderr\": 0.013504357787494032\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.40181956027293403,\n \"acc_stderr\": 0.013504357787494032\n }\n}\n```", "repo_url": "https://huggingface.co/xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_24_48.526699", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-24-48.526699.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-24-48.526699.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_24_48.526699", "path": ["results_2023-12-03T19-24-48.526699.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-24-48.526699.parquet"]}]}]} | 2023-12-03T19:24:56+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T19:24:48.526699 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:24:48.526699(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:24:48.526699(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
41,
31,
190,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model xxyyy123/1701221123_Ads_Mistral7B-slimorca_all-Lqv-r4b128 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:24:48.526699(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
494262e90687e541f585b342484270dc6639d2b7 |
# Dataset Card for Evaluation run of Sao10K/Stheno-V2-Delta-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Sao10K/Stheno-V2-Delta-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Sao10K/Stheno-V2-Delta-fp16](https://huggingface.co/Sao10K/Stheno-V2-Delta-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Sao10K__Stheno-V2-Delta-fp16",
"harness_gsm8k_5",
split="train")
```
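Since this repository contains two timestamped runs, a specific run can also be loaded by its split name instead of the `latest` alias. The snippet below is a sketch; the split name follows the timestamped pattern listed in this card's configuration.

```python
from datasets import load_dataset

# Load one specific timestamped run rather than the "latest" alias.
run = load_dataset(
    "open-llm-leaderboard/details_Sao10K__Stheno-V2-Delta-fp16",
    "harness_gsm8k_5",
    split="2023_12_03T19_27_52.761303",
)
print(run)
```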
## Latest results
These are the [latest results from run 2023-12-03T19:27:58.117537](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Stheno-V2-Delta-fp16/blob/main/results_2023-12-03T19-27-58.117537.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.1281273692191054,
"acc_stderr": 0.009206398549980031
},
"harness|gsm8k|5": {
"acc": 0.1281273692191054,
"acc_stderr": 0.009206398549980031
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Sao10K__Stheno-v2-Delta-fp16 | [
"region:us"
] | 2023-12-03T19:27:55+00:00 | {"pretty_name": "Evaluation run of Sao10K/Stheno-V2-Delta-fp16", "dataset_summary": "Dataset automatically created during the evaluation run of model [Sao10K/Stheno-V2-Delta-fp16](https://huggingface.co/Sao10K/Stheno-V2-Delta-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sao10K__Stheno-V2-Delta-fp16\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:27:58.117537](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Stheno-V2-Delta-fp16/blob/main/results_2023-12-03T19-27-58.117537.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.1281273692191054,\n \"acc_stderr\": 0.009206398549980031\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1281273692191054,\n \"acc_stderr\": 0.009206398549980031\n }\n}\n```", "repo_url": "https://huggingface.co/Sao10K/Stheno-V2-Delta-fp16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_27_52.761303", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-27-52.761303.parquet"]}, {"split": "2023_12_03T19_27_58.117537", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-27-58.117537.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-27-58.117537.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_27_52.761303", "path": ["results_2023-12-03T19-27-52.761303.parquet"]}, {"split": "2023_12_03T19_27_58.117537", "path": ["results_2023-12-03T19-27-58.117537.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-27-58.117537.parquet"]}]}]} | 2023-12-03T19:28:05+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Sao10K/Stheno-V2-Delta-fp16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Sao10K/Stheno-V2-Delta-fp16 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T19:27:58.117537 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Sao10K/Stheno-V2-Delta-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sao10K/Stheno-V2-Delta-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:27:58.117537(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Sao10K/Stheno-V2-Delta-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sao10K/Stheno-V2-Delta-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:27:58.117537(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
25,
31,
174,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Sao10K/Stheno-V2-Delta-fp16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Sao10K/Stheno-V2-Delta-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:27:58.117537(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2a8e22cfaa7c84e26e796327650b9fab3e365dcb |
# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-chat-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TigerResearch/tigerbot-70b-chat-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TigerResearch/tigerbot-70b-chat-v2](https://huggingface.co/TigerResearch/tigerbot-70b-chat-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat-v2",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:49:28.607011](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat-v2/blob/main/results_2023-12-03T19-49-28.607011.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5435936315390447,
"acc_stderr": 0.013720038270485332
},
"harness|gsm8k|5": {
"acc": 0.5435936315390447,
"acc_stderr": 0.013720038270485332
}
}
```
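Because this card lists two timestamped runs, the aggregated "results" configuration can be loaded to compare them side by side. This is a sketch only; the split names follow the timestamped pattern given in this card's configuration.

```python
from datasets import load_dataset

# The "results" configuration aggregates both timestamped runs of this model.
results = load_dataset(
    "open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat-v2",
    "results",
)
print(results)  # expect one split per run timestamp, plus a "latest" alias
```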
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat-v2 | [
"region:us"
] | 2023-12-03T19:40:07+00:00 | {"pretty_name": "Evaluation run of TigerResearch/tigerbot-70b-chat-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [TigerResearch/tigerbot-70b-chat-v2](https://huggingface.co/TigerResearch/tigerbot-70b-chat-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat-v2\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:49:28.607011](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat-v2/blob/main/results_2023-12-03T19-49-28.607011.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5435936315390447,\n \"acc_stderr\": 0.013720038270485332\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5435936315390447,\n \"acc_stderr\": 0.013720038270485332\n }\n}\n```", "repo_url": "https://huggingface.co/TigerResearch/tigerbot-70b-chat-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_40_05.038065", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-40-05.038065.parquet"]}, {"split": "2023_12_03T19_49_28.607011", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-49-28.607011.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-49-28.607011.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_40_05.038065", "path": ["results_2023-12-03T19-40-05.038065.parquet"]}, {"split": "2023_12_03T19_49_28.607011", "path": ["results_2023-12-03T19-49-28.607011.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-49-28.607011.parquet"]}]}]} | 2023-12-03T19:49:35+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-chat-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TigerResearch/tigerbot-70b-chat-v2 on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T19:49:28.607011 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-chat-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TigerResearch/tigerbot-70b-chat-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:49:28.607011(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-chat-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TigerResearch/tigerbot-70b-chat-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:49:28.607011(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-chat-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TigerResearch/tigerbot-70b-chat-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:49:28.607011(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7e085582a2fed98cc0c635dbba04dcda372b5455 |
# Dataset Card for Evaluation run of chargoddard/Yi-34B-Llama
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/Yi-34B-Llama
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__Yi-34B-Llama",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:49:20.371477](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__Yi-34B-Llama/blob/main/results_2023-12-03T19-49-20.371477.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6080363912054587,
"acc_stderr": 0.013447140886023824
},
"harness|gsm8k|5": {
"acc": 0.6080363912054587,
"acc_stderr": 0.013447140886023824
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_chargoddard__Yi-34B-Llama | [
"region:us"
] | 2023-12-03T19:49:22+00:00 | {"pretty_name": "Evaluation run of chargoddard/Yi-34B-Llama", "dataset_summary": "Dataset automatically created during the evaluation run of model [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__Yi-34B-Llama\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:49:20.371477](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__Yi-34B-Llama/blob/main/results_2023-12-03T19-49-20.371477.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6080363912054587,\n \"acc_stderr\": 0.013447140886023824\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6080363912054587,\n \"acc_stderr\": 0.013447140886023824\n }\n}\n```", "repo_url": "https://huggingface.co/chargoddard/Yi-34B-Llama", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T19_49_20.371477", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-49-20.371477.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-49-20.371477.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T19_49_20.371477", "path": ["results_2023-12-03T19-49-20.371477.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-49-20.371477.parquet"]}]}]} | 2023-12-03T19:49:27+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of chargoddard/Yi-34B-Llama
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model chargoddard/Yi-34B-Llama on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T19:49:20.371477 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of chargoddard/Yi-34B-Llama",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/Yi-34B-Llama on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:49:20.371477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of chargoddard/Yi-34B-Llama",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/Yi-34B-Llama on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:49:20.371477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
21,
31,
170,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chargoddard/Yi-34B-Llama## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/Yi-34B-Llama on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:49:20.371477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5f839b1f368a76b161028bb9edff055db34022b2 |
This is the **OSS-Instruct** dataset generated with `gpt-3.5-turbo-1106`, a model developed by OpenAI. Please pay attention to OpenAI's usage policy when adopting this dataset: https://openai.com/policies/usage-policies.
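For reference, a minimal loading sketch with the Hugging Face `datasets` library is shown below; the "train" split name is an assumption, and the record fields should be inspected rather than assumed.

```python
from datasets import load_dataset

# Minimal loading sketch; the "train" split name is assumed, not confirmed by this card.
ds = load_dataset("ise-uiuc/Magicoder-OSS-Instruct-75K", split="train")
print(ds)
print(ds[0])  # inspect the actual fields before relying on any column names
```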
| ise-uiuc/Magicoder-OSS-Instruct-75K | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:10K<n<100K",
"license:mit",
"region:us"
] | 2023-12-03T20:04:53+00:00 | {"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational"]} | 2023-12-04T10:35:04+00:00 | [] | [] | TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #license-mit #region-us
|
This is the OSS-Instruct dataset generated with 'gpt-3.5-turbo-1106', a model developed by OpenAI. Please pay attention to OpenAI's usage policy when adopting this dataset: URL
| [] | [
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #license-mit #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #license-mit #region-us \n"
] |
b0079beaa0361d82412520b873715bee59cc7dd4 |
A decontaminated version of [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1). Decontamination is done in the same way as StarCoder ([bigcode decontamination process](https://github.com/bigcode-project/bigcode-dataset/tree/main/decontamination)). | ise-uiuc/Magicoder-Evol-Instruct-110K | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | 2023-12-03T20:05:56+00:00 | {"license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "conversational"]} | 2023-12-28T03:23:17+00:00 | [] | [] | TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #license-apache-2.0 #region-us
|
A decontaminated version of evol-codealpaca-v1. Decontamination is done in the same way as StarCoder (bigcode decontamination process). | [] | [
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #license-apache-2.0 #region-us \n"
] | [
47
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #license-apache-2.0 #region-us \n"
] |
4617836ccd811d51dabfc78a908e4de0efc8f03f |
# Dataset Card for Evaluation run of Walmart-the-bag/Misted-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Walmart-the-bag/Misted-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Walmart-the-bag/Misted-7B](https://huggingface.co/Walmart-the-bag/Misted-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Walmart-the-bag__Misted-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T20:09:02.176077](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__Misted-7B/blob/main/results_2023-12-03T20-09-02.176077.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5959059893858984,
"acc_stderr": 0.01351675297272172
},
"harness|gsm8k|5": {
"acc": 0.5959059893858984,
"acc_stderr": 0.01351675297272172
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Walmart-the-bag__Misted-7B | [
"region:us"
] | 2023-12-03T20:09:04+00:00 | {"pretty_name": "Evaluation run of Walmart-the-bag/Misted-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Walmart-the-bag/Misted-7B](https://huggingface.co/Walmart-the-bag/Misted-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Walmart-the-bag__Misted-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T20:09:02.176077](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__Misted-7B/blob/main/results_2023-12-03T20-09-02.176077.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5959059893858984,\n \"acc_stderr\": 0.01351675297272172\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5959059893858984,\n \"acc_stderr\": 0.01351675297272172\n }\n}\n```", "repo_url": "https://huggingface.co/Walmart-the-bag/Misted-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T20_09_02.176077", "path": ["**/details_harness|gsm8k|5_2023-12-03T20-09-02.176077.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T20-09-02.176077.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T20_09_02.176077", "path": ["results_2023-12-03T20-09-02.176077.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T20-09-02.176077.parquet"]}]}]} | 2023-12-03T20:09:09+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Walmart-the-bag/Misted-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Walmart-the-bag/Misted-7B on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T20:09:02.176077 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Walmart-the-bag/Misted-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/Misted-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T20:09:02.176077(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Walmart-the-bag/Misted-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/Misted-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T20:09:02.176077(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Walmart-the-bag/Misted-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/Misted-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T20:09:02.176077(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
189ab7c05788d67d5f5d4f671d5de64ea9e35804 | # Dataset Card for "OpenWebText-2M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | maxtli/OpenWebText-2M | [
"region:us"
] | 2023-12-03T20:41:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4970246069, "num_examples": 1000000}], "download_size": 3024267517, "dataset_size": 4970246069}} | 2023-12-03T21:03:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "OpenWebText-2M"
More Information needed | [
"# Dataset Card for \"OpenWebText-2M\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"OpenWebText-2M\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"OpenWebText-2M\"\n\nMore Information needed"
] |
25072a276f932bacb047b55c93f94f0bbe10417e | # Dataset Card for "patched_test_f_SPOUT_ps_50_preds_v2023d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roa7n/patched_test_f_SPOUT_ps_50_preds_v2023d | [
"region:us"
] | 2023-12-03T21:01:41+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1971093, "num_examples": 5816}], "download_size": 219220, "dataset_size": 1971093}} | 2023-12-03T21:01:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "patched_test_f_SPOUT_ps_50_preds_v2023d"
More Information needed | [
"# Dataset Card for \"patched_test_f_SPOUT_ps_50_preds_v2023d\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"patched_test_f_SPOUT_ps_50_preds_v2023d\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"patched_test_f_SPOUT_ps_50_preds_v2023d\"\n\nMore Information needed"
] |
53df73bd3560458e37880c366986bb1f68a5f923 |
# Retsinformation retrieval benchmark
Datasets related to generating a Q & Chunk dataset based on guides (vejledninger) from retsinformation.dk to be used as a retrieval benchmark.
vejledninger_tekst.csv contains a dict with all vejledninger (scraped 8/11/23) from retsinformation.dk
chunks_id_text.csv contains text chunks of at most 512 tokens in length, created by splitting all the text from vejledninger_tekst.csv, along with a unique id
chunks_questions_100_samples.csv contains a sample of 200 auto-generated questions, based on the first 100 text chunks from the chunks_id_text.csv file, along with the matching text chunk. | jealk/dk_retrieval_benchmark | [
"size_categories:10K<n<100K",
"language:da",
"region:us"
] | 2023-12-03T21:28:52+00:00 | {"language": ["da"], "size_categories": ["10K<n<100K"], "pretty_name": "Retsinformation DK Retrieval Benchmark", "dataset_info": [{"config_name": "generated_questions", "features": [{"name": "title_vejledning", "dtype": "string"}, {"name": "chunk_text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "generated_question", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 263556, "num_examples": 200}], "download_size": 48578, "dataset_size": 263556}, {"config_name": "retsinformation", "features": [{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "html_content", "dtype": "string"}, {"name": "text_content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62646653, "num_examples": 433}], "download_size": 20333540, "dataset_size": 62646653}], "configs": [{"config_name": "generated_questions", "data_files": [{"split": "train", "path": "generated_questions/train-*"}]}, {"config_name": "retsinformation", "data_files": [{"split": "train", "path": "retsinformation/train-*"}]}]} | 2024-02-11T17:55:13+00:00 | [] | [
"da"
] | TAGS
#size_categories-10K<n<100K #language-Danish #region-us
|
# Retsinformation retrieval benchmark
Datasets related to generating a Q & Chunk dataset based on guides (vejledninger) from URL to be used as a retrieval benchmark.
vejledninger_tekst.csv contains a dict with all vejledninger (scraped 8/11/23) from URL
chunks_id_text.csv contains text chunks of at most 512 tokens in length, created by splitting all the text from vejledninger_tekst.csv, along with a unique id
chunks_questions_100_samples.csv contains a sample of 200 auto-generated questions, based on the first 100 text chunks from the chunks_id_text.csv file, along with the matching text chunk. | [
"# Retsinformation retrieval benchmark\n\nDatasets related to generating a Q & Chunk dataset based on guides (vejledninger) from URL to be used as a retrieval benchmark.\n\nvejledninger_tekst.csv contains a dict with all vejledninger (scraped 8/11/23) from URL \n\nchunks_id_text.csv contains text chunks of max 512 token len, based on splitting all the text from vejledninger_tekst.csv, along with a unique id\n\nchunks_questions_100_samples.csv contains a sample of 200 auto-generated questions, based on the first 100 text chunks from the chunks_id_text.csv file, along with the matching text chunk."
] | [
"TAGS\n#size_categories-10K<n<100K #language-Danish #region-us \n",
"# Retsinformation retrieval benchmark\n\nDatasets related to generating a Q & Chunk dataset based on guides (vejledninger) from URL to be used as a retrieval benchmark.\n\nvejledninger_tekst.csv contains a dict with all vejledninger (scraped 8/11/23) from URL \n\nchunks_id_text.csv contains text chunks of max 512 token len, based on splitting all the text from vejledninger_tekst.csv, along with a unique id\n\nchunks_questions_100_samples.csv contains a sample of 200 auto-generated questions, based on the first 100 text chunks from the chunks_id_text.csv file, along with the matching text chunk."
] | [
23,
164
] | [
"passage: TAGS\n#size_categories-10K<n<100K #language-Danish #region-us \n# Retsinformation retrieval benchmark\n\nDatasets related to generating a Q & Chunk dataset based on guides (vejledninger) from URL to be used as a retrieval benchmark.\n\nvejledninger_tekst.csv contains a dict with all vejledninger (scraped 8/11/23) from URL \n\nchunks_id_text.csv contains text chunks of max 512 token len, based on splitting all the text from vejledninger_tekst.csv, along with a unique id\n\nchunks_questions_100_samples.csv contains a sample of 200 auto-generated questions, based on the first 100 text chunks from the chunks_id_text.csv file, along with the matching text chunk."
] |
2170745633eb04640cd321dda57ef5dbc58df8f0 |
# Dataset Card for Evaluation run of bhenrym14/platypus-yi-34b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bhenrym14/platypus-yi-34b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [bhenrym14/platypus-yi-34b](https://huggingface.co/bhenrym14/platypus-yi-34b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bhenrym14__platypus-yi-34b",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T21:40:07.043539](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__platypus-yi-34b/blob/main/results_2023-12-03T21-40-07.043539.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6103108415466262,
"acc_stderr": 0.01343312323611072
},
"harness|gsm8k|5": {
"acc": 0.6103108415466262,
"acc_stderr": 0.01343312323611072
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_bhenrym14__platypus-yi-34b | [
"region:us"
] | 2023-12-03T21:36:59+00:00 | {"pretty_name": "Evaluation run of bhenrym14/platypus-yi-34b", "dataset_summary": "Dataset automatically created during the evaluation run of model [bhenrym14/platypus-yi-34b](https://huggingface.co/bhenrym14/platypus-yi-34b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bhenrym14__platypus-yi-34b\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T21:40:07.043539](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__platypus-yi-34b/blob/main/results_2023-12-03T21-40-07.043539.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6103108415466262,\n \"acc_stderr\": 0.01343312323611072\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6103108415466262,\n \"acc_stderr\": 0.01343312323611072\n }\n}\n```", "repo_url": "https://huggingface.co/bhenrym14/platypus-yi-34b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T21_36_56.890995", "path": ["**/details_harness|gsm8k|5_2023-12-03T21-36-56.890995.parquet"]}, {"split": "2023_12_03T21_40_07.043539", "path": ["**/details_harness|gsm8k|5_2023-12-03T21-40-07.043539.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T21-40-07.043539.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T21_36_56.890995", "path": ["results_2023-12-03T21-36-56.890995.parquet"]}, {"split": "2023_12_03T21_40_07.043539", "path": ["results_2023-12-03T21-40-07.043539.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T21-40-07.043539.parquet"]}]}]} | 2023-12-03T21:40:15+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of bhenrym14/platypus-yi-34b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model bhenrym14/platypus-yi-34b on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T21:40:07.043539 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of bhenrym14/platypus-yi-34b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/platypus-yi-34b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T21:40:07.043539(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bhenrym14/platypus-yi-34b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/platypus-yi-34b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T21:40:07.043539(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bhenrym14/platypus-yi-34b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/platypus-yi-34b on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T21:40:07.043539(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4bf95af7991c5b4dc39cacfcbbc6b43c8166a01d |
# Dataset Card for Evaluation run of brucethemoose/Capybara-Tess-Yi-34B-200K
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [brucethemoose/Capybara-Tess-Yi-34B-200K](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_brucethemoose__Capybara-Tess-Yi-34B-200K",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T21:47:28.833040](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Capybara-Tess-Yi-34B-200K/blob/main/results_2023-12-03T21-47-28.833040.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5739196360879454,
"acc_stderr": 0.013621144396086712
},
"harness|gsm8k|5": {
"acc": 0.5739196360879454,
"acc_stderr": 0.013621144396086712
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_brucethemoose__Capybara-Tess-Yi-34B-200K | [
"region:us"
] | 2023-12-03T21:41:09+00:00 | {"pretty_name": "Evaluation run of brucethemoose/Capybara-Tess-Yi-34B-200K", "dataset_summary": "Dataset automatically created during the evaluation run of model [brucethemoose/Capybara-Tess-Yi-34B-200K](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_brucethemoose__Capybara-Tess-Yi-34B-200K\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T21:47:28.833040](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Capybara-Tess-Yi-34B-200K/blob/main/results_2023-12-03T21-47-28.833040.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5739196360879454,\n \"acc_stderr\": 0.013621144396086712\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5739196360879454,\n \"acc_stderr\": 0.013621144396086712\n }\n}\n```", "repo_url": "https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_03T21_41_07.092884", "path": ["**/details_harness|gsm8k|5_2023-12-03T21-41-07.092884.parquet"]}, {"split": "2023_12_03T21_47_28.833040", "path": ["**/details_harness|gsm8k|5_2023-12-03T21-47-28.833040.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T21-47-28.833040.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_03T21_41_07.092884", "path": ["results_2023-12-03T21-41-07.092884.parquet"]}, {"split": "2023_12_03T21_47_28.833040", "path": ["results_2023-12-03T21-47-28.833040.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T21-47-28.833040.parquet"]}]}]} | 2023-12-03T21:47:35+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of brucethemoose/Capybara-Tess-Yi-34B-200K
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model brucethemoose/Capybara-Tess-Yi-34B-200K on the Open LLM Leaderboard.
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T21:47:28.833040 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of brucethemoose/Capybara-Tess-Yi-34B-200K",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model brucethemoose/Capybara-Tess-Yi-34B-200K on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T21:47:28.833040(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of brucethemoose/Capybara-Tess-Yi-34B-200K",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model brucethemoose/Capybara-Tess-Yi-34B-200K on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T21:47:28.833040(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
29,
31,
178,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of brucethemoose/Capybara-Tess-Yi-34B-200K## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model brucethemoose/Capybara-Tess-Yi-34B-200K on the Open LLM Leaderboard.\n\nThe dataset is composed of 1 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T21:47:28.833040(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9a8bb0a904e449fbf6a208b2fea38a418504d742 |
Russian translation of the [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) dataset.
| IlyaGusev/pippa_ru | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:ru",
"license:apache-2.0",
"not-for-all-audiences",
"conversational",
"roleplay",
"region:us"
] | 2023-12-03T21:44:46+00:00 | {"language": ["ru"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational"], "pretty_name": "PIPPA Russian", "tags": ["not-for-all-audiences", "conversational", "roleplay"], "dataset_info": [{"config_name": "default", "features": [{"name": "gpt_35_turbo_result", "dtype": "string"}, {"name": "gpt_35_turbo_explanation", "dtype": "string"}, {"name": "translation_model", "dtype": "string"}, {"name": "bot_name", "dtype": "string"}, {"name": "bot_definitions", "dtype": "string"}, {"name": "orig_bot_definitions", "dtype": "string"}, {"name": "bot_description", "dtype": "string"}, {"name": "orig_bot_description", "dtype": "string"}, {"name": "conversation", "list": [{"name": "is_human", "dtype": "bool"}, {"name": "message", "dtype": "string"}]}, {"name": "orig_conversation", "list": [{"name": "is_human", "dtype": "bool"}, {"name": "message", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 96828729, "num_examples": 6624}], "download_size": 48761680, "dataset_size": 96828729}]} | 2024-01-21T13:04:21+00:00 | [] | [
"ru"
] | TAGS
#task_categories-conversational #size_categories-1K<n<10K #language-Russian #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #region-us
|
Russian translation of the PIPPA dataset.
| [] | [
"TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-Russian #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #region-us \n"
] | [
58
] | [
"passage: TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-Russian #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #region-us \n"
] |
7318c7cac7cdd5e2e7efc7e86d84256aaf54c8dc | # Dataset Card for Pontoon Translations
<!-- Provide a quick summary of the dataset. -->
This is a dataset containing strings from various Mozilla projects on Mozilla's [Pontoon](https://pontoon.mozilla.org) localization platform and their translations into more than 200 languages.
Source strings are in English.
To avoid rows with values like "None" and "N/A" being interpreted as missing values, pass the keep_default_na parameter like this:
```
from datasets import load_dataset
dataset = load_dataset("ayymen/Pontoon-Translations", keep_default_na=False)
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** Per [Pontoons's terms](https://pontoon.mozilla.org/terms/) "Translations are governed by the [Mozilla Public License 2.0](https://www.mozilla.org/en-US/MPL/2.0/), or another license or set of licenses acceptable to the Mozilla Foundation."
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
- Machine Translation
- Language Identification
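For example, a machine-translation workflow might load a single language-pair configuration (here `en-de`, one of the configs listed in this card's metadata); as shown above, `keep_default_na=False` keeps strings like "None" from being parsed as missing values:
```python
from datasets import load_dataset

# Load only the English-German pairs; keep_default_na=False prevents
# literal strings such as "None" or "N/A" from being read as NaN.
pairs = load_dataset("ayymen/Pontoon-Translations", "en-de", keep_default_na=False)
print(pairs["train"][0])  # "train" split assumed; adjust if needed
```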
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
- Sentence pairs with empty/missing elements were dropped.
- Identical pairs were dropped.
- Rows where the English string does not contain any letters were dropped.
- Leading and trailing whitespace was stripped.
- Rows were deduplicated.
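A minimal sketch of these cleaning steps, assuming the raw pairs sit in a pandas DataFrame with hypothetical columns `source` (English) and `target`:
```python
import pandas as pd

def clean_pairs(df: pd.DataFrame) -> pd.DataFrame:
    # Drop sentence pairs with empty/missing elements.
    df = df.dropna(subset=["source", "target"])
    df = df[(df["source"] != "") & (df["target"] != "")]
    # Drop identical pairs.
    df = df[df["source"] != df["target"]]
    # Drop rows where the English string contains no letters
    # (ASCII letters assumed here for simplicity).
    df = df[df["source"].str.contains(r"[A-Za-z]")]
    # Strip leading and trailing whitespace.
    df = df.assign(source=df["source"].str.strip(), target=df["target"].str.strip())
    # Deduplicate.
    return df.drop_duplicates(ignore_index=True)
```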
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
Pontoon users.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | ayymen/Pontoon-Translations | [
"task_categories:translation",
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language:ab",
"language:ace",
"language:ach",
"language:ady",
"language:af",
"language:am",
"language:an",
"language:ann",
"language:anp",
"language:ar",
"language:arn",
"language:as",
"language:ast",
"language:ay",
"language:az",
"language:azb",
"language:azz",
"language:ba",
"language:ban",
"language:bas",
"language:be",
"language:bg",
"language:bm",
"language:bn",
"language:bo",
"language:br",
"language:brx",
"language:bs",
"language:bxr",
"language:ca",
"language:cak",
"language:ceb",
"language:ckb",
"language:cnh",
"language:co",
"language:crh",
"language:cs",
"language:csb",
"language:cv",
"language:cy",
"language:da",
"language:dag",
"language:de",
"language:dsb",
"language:dv",
"language:dyu",
"language:ee",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ewo",
"language:fa",
"language:ff",
"language:fi",
"language:fo",
"language:fr",
"language:frp",
"language:fuf",
"language:fur",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:gom",
"language:gor",
"language:gu",
"language:guc",
"language:gv",
"language:ha",
"language:he",
"language:hi",
"language:hil",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hus",
"language:hy",
"language:hye",
"language:hyw",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ilo",
"language:is",
"language:it",
"language:ixl",
"language:ja",
"language:jbo",
"language:jiv",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kk",
"language:km",
"language:kmr",
"language:kn",
"language:ko",
"language:kpv",
"language:ks",
"language:kw",
"language:ky",
"language:lb",
"language:lg",
"language:lij",
"language:ln",
"language:lo",
"language:lt",
"language:ltg",
"language:lus",
"language:lv",
"language:lzz",
"language:mai",
"language:mdf",
"language:meh",
"language:mg",
"language:mhr",
"language:mix",
"language:mk",
"language:ml",
"language:mn",
"language:mni",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:my",
"language:myv",
"language:nan",
"language:nb",
"language:ne",
"language:nhe",
"language:nhi",
"language:nia",
"language:nl",
"language:nn",
"language:nr",
"language:nso",
"language:nv",
"language:ny",
"language:nyn",
"language:oc",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pai",
"language:pap",
"language:pl",
"language:ppl",
"language:ps",
"language:pt",
"language:quc",
"language:quy",
"language:qvi",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:ses",
"language:shi",
"language:si",
"language:sk",
"language:skr",
"language:sl",
"language:sn",
"language:snk",
"language:son",
"language:sq",
"language:sr",
"language:ss",
"language:st",
"language:su",
"language:sv",
"language:sw",
"language:syr",
"language:szl",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tig",
"language:tk",
"language:tl",
"language:tn",
"language:tok",
"language:tr",
"language:trs",
"language:ts",
"language:tsz",
"language:tt",
"language:tw",
"language:tyv",
"language:tzm",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vi",
"language:vot",
"language:wo",
"language:xcl",
"language:xh",
"language:yi",
"language:yo",
"language:yua",
"language:yue",
"language:zam",
"language:zgh",
"language:zh",
"language:zu",
"language:zza",
"license:mpl-2.0",
"region:us"
] | 2023-12-03T21:44:52+00:00 | {"annotations_creators": ["crowdsourced"], "language": ["ab", "ace", "ach", "ady", "af", "am", "an", "ann", "anp", "ar", "arn", "as", "ast", "ay", "az", "azb", "azz", "ba", "ban", "bas", "be", "bg", "bm", "bn", "bo", "br", "brx", "bs", "bxr", "ca", "cak", "ceb", "ckb", "cnh", "co", "crh", "cs", "csb", "cv", "cy", "da", "dag", "de", "dsb", "dv", "dyu", "ee", "el", "en", "eo", "es", "et", "eu", "ewo", "fa", "ff", "fi", "fo", "fr", "frp", "fuf", "fur", "fy", "ga", "gd", "gl", "gn", "gom", "gor", "gu", "guc", "gv", "ha", "he", "hi", "hil", "hr", "hsb", "ht", "hu", "hus", "hy", "hye", "hyw", "ia", "id", "ie", "ig", "ilo", "is", "it", "ixl", "ja", "jbo", "jiv", "jv", "ka", "kaa", "kab", "kbd", "kk", "km", "kmr", "kn", "ko", "kpv", "ks", "kw", "ky", "lb", "lg", "lij", "ln", "lo", "lt", "ltg", "lus", "lv", "lzz", "mai", "mdf", "meh", "mg", "mhr", "mix", "mk", "ml", "mn", "mni", "mr", "mrj", "ms", "mt", "my", "myv", "nan", "nb", "ne", "nhe", "nhi", "nia", "nl", "nn", "nr", "nso", "nv", "ny", "nyn", "oc", "om", "or", "os", "pa", "pai", "pap", "pl", "ppl", "ps", "pt", "quc", "quy", "qvi", "rm", "ro", "ru", "rw", "sah", "sat", "sc", "scn", "sco", "sd", "ses", "shi", "si", "sk", "skr", "sl", "sn", "snk", "son", "sq", "sr", "ss", "st", "su", "sv", "sw", "syr", "szl", "ta", "te", "tg", "th", "ti", "tig", "tk", "tl", "tn", "tok", "tr", "trs", "ts", "tsz", "tt", "tw", "tyv", "tzm", "udm", "ug", "uk", "ur", "uz", "ve", "vec", "vi", "vot", "wo", "xcl", "xh", "yi", "yo", "yua", "yue", "zam", "zgh", "zh", "zu", "zza"], "license": "mpl-2.0", "task_categories": ["translation", "text2text-generation"], "pretty_name": "Pontoon Translations", "configs": [{"config_name": "en-ht", "data_files": "en-ht.tsv"}, {"config_name": "en-ab", "data_files": "en-ab.tsv"}, {"config_name": "en-cs", "data_files": "en-cs.tsv"}, {"config_name": "en-nyn", "data_files": "en-nyn.tsv"}, {"config_name": "en-fi", "data_files": "en-fi.tsv"}, {"config_name": "en-nr", "data_files": "en-nr.tsv"}, {"config_name": "en-ace", "data_files": "en-ace.tsv"}, {"config_name": "en-yua", "data_files": "en-yua.tsv"}, {"config_name": "en-zh-CN", "data_files": "en-zh-CN.tsv"}, {"config_name": "en-bs", "data_files": "en-bs.tsv"}, {"config_name": "en-de", "data_files": "en-de.tsv"}, {"config_name": "en-ny", "data_files": "en-ny.tsv"}, {"config_name": "en-ca-valencia", "data_files": "en-ca-valencia.tsv"}, {"config_name": "en-lij", "data_files": "en-lij.tsv"}, {"config_name": "en-cv", "data_files": "en-cv.tsv"}, {"config_name": "en-xh", "data_files": "en-xh.tsv"}, {"config_name": "en-son", "data_files": "en-son.tsv"}, {"config_name": "en-bm", "data_files": "en-bm.tsv"}, {"config_name": "en-gn", "data_files": "en-gn.tsv"}, {"config_name": "en-lb", "data_files": "en-lb.tsv"}, {"config_name": "en-lv", "data_files": "en-lv.tsv"}, {"config_name": "en-pl", "data_files": "en-pl.tsv"}, {"config_name": "en-bo", "data_files": "en-bo.tsv"}, {"config_name": "en-es-AR", "data_files": "en-es-AR.tsv"}, {"config_name": "en-tig", "data_files": "en-tig.tsv"}, {"config_name": "en-nb-NO", "data_files": "en-nb-NO.tsv"}, {"config_name": "en-tk", "data_files": "en-tk.tsv"}, {"config_name": "en-xcl", "data_files": "en-xcl.tsv"}, {"config_name": "en-ann", "data_files": "en-ann.tsv"}, {"config_name": "en-en-CA", "data_files": "en-en-CA.tsv"}, {"config_name": "en-yo", "data_files": "en-yo.tsv"}, {"config_name": "en-mix", "data_files": "en-mix.tsv"}, {"config_name": "en-tn", "data_files": "en-tn.tsv"}, {"config_name": "en-mai", "data_files": "en-mai.tsv"}, 
{"config_name": "en-cy", "data_files": "en-cy.tsv"}, {"config_name": "en-kmr", "data_files": "en-kmr.tsv"}, {"config_name": "en-bas", "data_files": "en-bas.tsv"}, {"config_name": "en-anp", "data_files": "en-anp.tsv"}, {"config_name": "en-skr", "data_files": "en-skr.tsv"}, {"config_name": "en-quy", "data_files": "en-quy.tsv"}, {"config_name": "en-gu-IN", "data_files": "en-gu-IN.tsv"}, {"config_name": "en-it", "data_files": "en-it.tsv"}, {"config_name": "en-tzm", "data_files": "en-tzm.tsv"}, {"config_name": "en-ne-NP", "data_files": "en-ne-NP.tsv"}, {"config_name": "en-uk", "data_files": "en-uk.tsv"}, {"config_name": "en-lzz", "data_files": "en-lzz.tsv"}, {"config_name": "en-zza", "data_files": "en-zza.tsv"}, {"config_name": "en-gv", "data_files": "en-gv.tsv"}, {"config_name": "en-vi", "data_files": "en-vi.tsv"}, {"config_name": "en-te", "data_files": "en-te.tsv"}, {"config_name": "en-hil", "data_files": "en-hil.tsv"}, {"config_name": "en-quc", "data_files": "en-quc.tsv"}, {"config_name": "en-mr", "data_files": "en-mr.tsv"}, {"config_name": "en-eo", "data_files": "en-eo.tsv"}, {"config_name": "en-ar", "data_files": "en-ar.tsv"}, {"config_name": "en-zam", "data_files": "en-zam.tsv"}, {"config_name": "en-rm-sursilv", "data_files": "en-rm-sursilv.tsv"}, {"config_name": "en-shi", "data_files": "en-shi.tsv"}, {"config_name": "en-sl", "data_files": "en-sl.tsv"}, {"config_name": "en-th", "data_files": "en-th.tsv"}, {"config_name": "en-ks", "data_files": "en-ks.tsv"}, {"config_name": "en-ses", "data_files": "en-ses.tsv"}, {"config_name": "en-pt-PT", "data_files": "en-pt-PT.tsv"}, {"config_name": "en-br", "data_files": "en-br.tsv"}, {"config_name": "en-es-ES", "data_files": "en-es-ES.tsv"}, {"config_name": "en-ppl", "data_files": "en-ppl.tsv"}, {"config_name": "en-ast", "data_files": "en-ast.tsv"}, {"config_name": "en-ia", "data_files": "en-ia.tsv"}, {"config_name": "en-id", "data_files": "en-id.tsv"}, {"config_name": "en-cnh", "data_files": "en-cnh.tsv"}, {"config_name": "en-gd", "data_files": "en-gd.tsv"}, {"config_name": "en-tr", "data_files": "en-tr.tsv"}, {"config_name": "en-es-MX", "data_files": "en-es-MX.tsv"}, {"config_name": "en-fo", "data_files": "en-fo.tsv"}, {"config_name": "en-hus", "data_files": "en-hus.tsv"}, {"config_name": "en-tw", "data_files": "en-tw.tsv"}, {"config_name": "en-brx", "data_files": "en-brx.tsv"}, {"config_name": "en-hi", "data_files": "en-hi.tsv"}, {"config_name": "en-lt", "data_files": "en-lt.tsv"}, {"config_name": "en-ky", "data_files": "en-ky.tsv"}, {"config_name": "en-si", "data_files": "en-si.tsv"}, {"config_name": "en-csb", "data_files": "en-csb.tsv"}, {"config_name": "en-ca", "data_files": "en-ca.tsv"}, {"config_name": "en-bg", "data_files": "en-bg.tsv"}, {"config_name": "en-fa", "data_files": "en-fa.tsv"}, {"config_name": "en-ig", "data_files": "en-ig.tsv"}, {"config_name": "en-kab", "data_files": "en-kab.tsv"}, {"config_name": "en-ay", "data_files": "en-ay.tsv"}, {"config_name": "en-oc", "data_files": "en-oc.tsv"}, {"config_name": "en-hye", "data_files": "en-hye.tsv"}, {"config_name": "en-ru", "data_files": "en-ru.tsv"}, {"config_name": "en-snk", "data_files": "en-snk.tsv"}, {"config_name": "en-ee", "data_files": "en-ee.tsv"}, {"config_name": "en-fur", "data_files": "en-fur.tsv"}, {"config_name": "en-gor", "data_files": "en-gor.tsv"}, {"config_name": "en-udm", "data_files": "en-udm.tsv"}, {"config_name": "en-es", "data_files": "en-es.tsv"}, {"config_name": "en-az", "data_files": "en-az.tsv"}, {"config_name": "en-nia", "data_files": "en-nia.tsv"}, 
{"config_name": "en-sw", "data_files": "en-sw.tsv"}, {"config_name": "en-nan-tw", "data_files": "en-nan-tw.tsv"}, {"config_name": "en-ja", "data_files": "en-ja.tsv"}, {"config_name": "en-da", "data_files": "en-da.tsv"}, {"config_name": "en-hu", "data_files": "en-hu.tsv"}, {"config_name": "en-nhe", "data_files": "en-nhe.tsv"}, {"config_name": "en-he", "data_files": "en-he.tsv"}, {"config_name": "en-mn", "data_files": "en-mn.tsv"}, {"config_name": "en-os", "data_files": "en-os.tsv"}, {"config_name": "en-mni", "data_files": "en-mni.tsv"}, {"config_name": "en-sc", "data_files": "en-sc.tsv"}, {"config_name": "en-hyw", "data_files": "en-hyw.tsv"}, {"config_name": "en-pt", "data_files": "en-pt.tsv"}, {"config_name": "en-ts", "data_files": "en-ts.tsv"}, {"config_name": "en-ady", "data_files": "en-ady.tsv"}, {"config_name": "en-ga-IE", "data_files": "en-ga-IE.tsv"}, {"config_name": "en-sr", "data_files": "en-sr.tsv"}, {"config_name": "en-bxr", "data_files": "en-bxr.tsv"}, {"config_name": "en-mk", "data_files": "en-mk.tsv"}, {"config_name": "en-lo", "data_files": "en-lo.tsv"}, {"config_name": "en-ckb", "data_files": "en-ckb.tsv"}, {"config_name": "en-sah", "data_files": "en-sah.tsv"}, {"config_name": "en-kk", "data_files": "en-kk.tsv"}, {"config_name": "en-nn-NO", "data_files": "en-nn-NO.tsv"}, {"config_name": "en-eu", "data_files": "en-eu.tsv"}, {"config_name": "en-ro", "data_files": "en-ro.tsv"}, {"config_name": "en-es-CL", "data_files": "en-es-CL.tsv"}, {"config_name": "en-cak", "data_files": "en-cak.tsv"}, {"config_name": "en-st", "data_files": "en-st.tsv"}, {"config_name": "en-am", "data_files": "en-am.tsv"}, {"config_name": "en-as", "data_files": "en-as.tsv"}, {"config_name": "en-kw", "data_files": "en-kw.tsv"}, {"config_name": "en-vot", "data_files": "en-vot.tsv"}, {"config_name": "en-tg", "data_files": "en-tg.tsv"}, {"config_name": "en-kn", "data_files": "en-kn.tsv"}, {"config_name": "en-ml", "data_files": "en-ml.tsv"}, {"config_name": "en-vec", "data_files": "en-vec.tsv"}, {"config_name": "en-ss", "data_files": "en-ss.tsv"}, {"config_name": "en-sn", "data_files": "en-sn.tsv"}, {"config_name": "en-pap-AW", "data_files": "en-pap-AW.tsv"}, {"config_name": "en-ha", "data_files": "en-ha.tsv"}, {"config_name": "en-ps", "data_files": "en-ps.tsv"}, {"config_name": "en-azb", "data_files": "en-azb.tsv"}, {"config_name": "en-en-GB", "data_files": "en-en-GB.tsv"}, {"config_name": "en-ewo", "data_files": "en-ewo.tsv"}, {"config_name": "en-tl", "data_files": "en-tl.tsv"}, {"config_name": "en-gl", "data_files": "en-gl.tsv"}, {"config_name": "en-bn-BD", "data_files": "en-bn-BD.tsv"}, {"config_name": "en-rw", "data_files": "en-rw.tsv"}, {"config_name": "en-mg", "data_files": "en-mg.tsv"}, {"config_name": "en-tok", "data_files": "en-tok.tsv"}, {"config_name": "en-tyv", "data_files": "en-tyv.tsv"}, {"config_name": "en-fy-NL", "data_files": "en-fy-NL.tsv"}, {"config_name": "en-dyu", "data_files": "en-dyu.tsv"}, {"config_name": "en-kpv", "data_files": "en-kpv.tsv"}, {"config_name": "en-pa-IN", "data_files": "en-pa-IN.tsv"}, {"config_name": "en-jv", "data_files": "en-jv.tsv"}, {"config_name": "en-meh", "data_files": "en-meh.tsv"}, {"config_name": "en-azz", "data_files": "en-azz.tsv"}, {"config_name": "en-pa-PK", "data_files": "en-pa-PK.tsv"}, {"config_name": "en-rm-vallader", "data_files": "en-rm-vallader.tsv"}, {"config_name": "en-nhi", "data_files": "en-nhi.tsv"}, {"config_name": "en-hsb", "data_files": "en-hsb.tsv"}, {"config_name": "en-be", "data_files": "en-be.tsv"}, {"config_name": "en-ba", "data_files": 
"en-ba.tsv"}, {"config_name": "en-en-ZA", "data_files": "en-en-ZA.tsv"}, {"config_name": "en-ug", "data_files": "en-ug.tsv"}, {"config_name": "en-ka", "data_files": "en-ka.tsv"}, {"config_name": "en-mhr", "data_files": "en-mhr.tsv"}, {"config_name": "en-sd", "data_files": "en-sd.tsv"}, {"config_name": "en-tt", "data_files": "en-tt.tsv"}, {"config_name": "en-yue", "data_files": "en-yue.tsv"}, {"config_name": "en-arn", "data_files": "en-arn.tsv"}, {"config_name": "en-ve", "data_files": "en-ve.tsv"}, {"config_name": "en-fr", "data_files": "en-fr.tsv"}, {"config_name": "en-lus", "data_files": "en-lus.tsv"}, {"config_name": "en-kaa", "data_files": "en-kaa.tsv"}, {"config_name": "en-el", "data_files": "en-el.tsv"}, {"config_name": "en-dag", "data_files": "en-dag.tsv"}, {"config_name": "en-hy-AM", "data_files": "en-hy-AM.tsv"}, {"config_name": "en-nl", "data_files": "en-nl.tsv"}, {"config_name": "en-pt-BR", "data_files": "en-pt-BR.tsv"}, {"config_name": "en-ti", "data_files": "en-ti.tsv"}, {"config_name": "en-trs", "data_files": "en-trs.tsv"}, {"config_name": "en-zgh", "data_files": "en-zgh.tsv", "default": true}, {"config_name": "en-ban", "data_files": "en-ban.tsv"}, {"config_name": "en-is", "data_files": "en-is.tsv"}, {"config_name": "en-ceb", "data_files": "en-ceb.tsv"}, {"config_name": "en-hi-IN", "data_files": "en-hi-IN.tsv"}, {"config_name": "en-nv", "data_files": "en-nv.tsv"}, {"config_name": "en-dsb", "data_files": "en-dsb.tsv"}, {"config_name": "en-ltg", "data_files": "en-ltg.tsv"}, {"config_name": "en-ln", "data_files": "en-ln.tsv"}, {"config_name": "en-ur", "data_files": "en-ur.tsv"}, {"config_name": "en-sat", "data_files": "en-sat.tsv"}, {"config_name": "en-om", "data_files": "en-om.tsv"}, {"config_name": "en-yi", "data_files": "en-yi.tsv"}, {"config_name": "en-fuf", "data_files": "en-fuf.tsv"}, {"config_name": "en-mt", "data_files": "en-mt.tsv"}, {"config_name": "en-zh-TW", "data_files": "en-zh-TW.tsv"}, {"config_name": "en-sq", "data_files": "en-sq.tsv"}, {"config_name": "en-qvi", "data_files": "en-qvi.tsv"}, {"config_name": "en-ff", "data_files": "en-ff.tsv"}, {"config_name": "en-et", "data_files": "en-et.tsv"}, {"config_name": "en-guc", "data_files": "en-guc.tsv"}, {"config_name": "en-af", "data_files": "en-af.tsv"}, {"config_name": "en-gom", "data_files": "en-gom.tsv"}, {"config_name": "en-ilo", "data_files": "en-ilo.tsv"}, {"config_name": "en-co", "data_files": "en-co.tsv"}, {"config_name": "en-rm", "data_files": "en-rm.tsv"}, {"config_name": "en-sv-SE", "data_files": "en-sv-SE.tsv"}, {"config_name": "en-ko", "data_files": "en-ko.tsv"}, {"config_name": "en-jbo", "data_files": "en-jbo.tsv"}, {"config_name": "en-sk", "data_files": "en-sk.tsv"}, {"config_name": "en-kbd", "data_files": "en-kbd.tsv"}, {"config_name": "en-ta", "data_files": "en-ta.tsv"}, {"config_name": "en-myv", "data_files": "en-myv.tsv"}, {"config_name": "en-syr", "data_files": "en-syr.tsv"}, {"config_name": "en-uz", "data_files": "en-uz.tsv"}, {"config_name": "en-crh", "data_files": "en-crh.tsv"}, {"config_name": "en-mrj", "data_files": "en-mrj.tsv"}, {"config_name": "en-szl", "data_files": "en-szl.tsv"}, {"config_name": "en-tsz", "data_files": "en-tsz.tsv"}, {"config_name": "en-ach", "data_files": "en-ach.tsv"}, {"config_name": "en-mdf", "data_files": "en-mdf.tsv"}, {"config_name": "en-hr", "data_files": "en-hr.tsv"}, {"config_name": "en-ixl", "data_files": "en-ixl.tsv"}, {"config_name": "en-ie", "data_files": "en-ie.tsv"}, {"config_name": "en-sco", "data_files": "en-sco.tsv"}, {"config_name": "en-zh-HK", 
"data_files": "en-zh-HK.tsv"}, {"config_name": "en-wo", "data_files": "en-wo.tsv"}, {"config_name": "en-bn", "data_files": "en-bn.tsv"}, {"config_name": "en-bn-IN", "data_files": "en-bn-IN.tsv"}, {"config_name": "en-nso", "data_files": "en-nso.tsv"}, {"config_name": "en-dv", "data_files": "en-dv.tsv"}, {"config_name": "en-jiv", "data_files": "en-jiv.tsv"}, {"config_name": "en-an", "data_files": "en-an.tsv"}, {"config_name": "en-km", "data_files": "en-km.tsv"}, {"config_name": "en-or", "data_files": "en-or.tsv"}, {"config_name": "en-zu", "data_files": "en-zu.tsv"}, {"config_name": "en-su", "data_files": "en-su.tsv"}, {"config_name": "en-pai", "data_files": "en-pai.tsv"}, {"config_name": "en-my", "data_files": "en-my.tsv"}, {"config_name": "en-scn", "data_files": "en-scn.tsv"}, {"config_name": "en-frp", "data_files": "en-frp.tsv"}, {"config_name": "en-ms", "data_files": "en-ms.tsv"}, {"config_name": "en-lg", "data_files": "en-lg.tsv"}]} | 2024-01-19T21:32:07+00:00 | [] | [
"ab",
"ace",
"ach",
"ady",
"af",
"am",
"an",
"ann",
"anp",
"ar",
"arn",
"as",
"ast",
"ay",
"az",
"azb",
"azz",
"ba",
"ban",
"bas",
"be",
"bg",
"bm",
"bn",
"bo",
"br",
"brx",
"bs",
"bxr",
"ca",
"cak",
"ceb",
"ckb",
"cnh",
"co",
"crh",
"cs",
"csb",
"cv",
"cy",
"da",
"dag",
"de",
"dsb",
"dv",
"dyu",
"ee",
"el",
"en",
"eo",
"es",
"et",
"eu",
"ewo",
"fa",
"ff",
"fi",
"fo",
"fr",
"frp",
"fuf",
"fur",
"fy",
"ga",
"gd",
"gl",
"gn",
"gom",
"gor",
"gu",
"guc",
"gv",
"ha",
"he",
"hi",
"hil",
"hr",
"hsb",
"ht",
"hu",
"hus",
"hy",
"hye",
"hyw",
"ia",
"id",
"ie",
"ig",
"ilo",
"is",
"it",
"ixl",
"ja",
"jbo",
"jiv",
"jv",
"ka",
"kaa",
"kab",
"kbd",
"kk",
"km",
"kmr",
"kn",
"ko",
"kpv",
"ks",
"kw",
"ky",
"lb",
"lg",
"lij",
"ln",
"lo",
"lt",
"ltg",
"lus",
"lv",
"lzz",
"mai",
"mdf",
"meh",
"mg",
"mhr",
"mix",
"mk",
"ml",
"mn",
"mni",
"mr",
"mrj",
"ms",
"mt",
"my",
"myv",
"nan",
"nb",
"ne",
"nhe",
"nhi",
"nia",
"nl",
"nn",
"nr",
"nso",
"nv",
"ny",
"nyn",
"oc",
"om",
"or",
"os",
"pa",
"pai",
"pap",
"pl",
"ppl",
"ps",
"pt",
"quc",
"quy",
"qvi",
"rm",
"ro",
"ru",
"rw",
"sah",
"sat",
"sc",
"scn",
"sco",
"sd",
"ses",
"shi",
"si",
"sk",
"skr",
"sl",
"sn",
"snk",
"son",
"sq",
"sr",
"ss",
"st",
"su",
"sv",
"sw",
"syr",
"szl",
"ta",
"te",
"tg",
"th",
"ti",
"tig",
"tk",
"tl",
"tn",
"tok",
"tr",
"trs",
"ts",
"tsz",
"tt",
"tw",
"tyv",
"tzm",
"udm",
"ug",
"uk",
"ur",
"uz",
"ve",
"vec",
"vi",
"vot",
"wo",
"xcl",
"xh",
"yi",
"yo",
"yua",
"yue",
"zam",
"zgh",
"zh",
"zu",
"zza"
] | TAGS
#task_categories-translation #task_categories-text2text-generation #annotations_creators-crowdsourced #language-Abkhazian #language-Achinese #language-Acoli #language-Adyghe #language-Afrikaans #language-Amharic #language-Aragonese #language-Obolo #language-Angika #language-Arabic #language-Mapudungun #language-Assamese #language-Asturian #language-Aymara #language-Azerbaijani #language-South Azerbaijani #language-Highland Puebla Nahuatl #language-Bashkir #language-Balinese #language-Basa (Cameroon) #language-Belarusian #language-Bulgarian #language-Bambara #language-Bengali #language-Tibetan #language-Breton #language-Bodo (India) #language-Bosnian #language-Russia Buriat #language-Catalan #language-Kaqchikel #language-Cebuano #language-Central Kurdish #language-Hakha Chin #language-Corsican #language-Crimean Tatar #language-Czech #language-Kashubian #language-Chuvash #language-Welsh #language-Danish #language-Dagbani #language-German #language-Lower Sorbian #language-Dhivehi #language-Dyula #language-Ewe #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Ewondo #language-Persian #language-Fulah #language-Finnish #language-Faroese #language-French #language-Arpitan #language-Pular #language-Friulian #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Guarani #language-Goan Konkani #language-Gorontalo #language-Gujarati #language-Wayuu #language-Manx #language-Hausa #language-Hebrew #language-Hindi #language-Hiligaynon #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Huastec #language-Armenian #language-Armenian #language-Western Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Interlingue #language-Igbo #language-Iloko #language-Icelandic #language-Italian #language-Ixil #language-Japanese #language-Lojban #language-Shuar #language-Javanese #language-Georgian #language-Kara-Kalpak #language-Kabyle #language-Kabardian #language-Kazakh #language-Khmer #language-Northern Kurdish #language-Kannada #language-Korean #language-Komi-Zyrian #language-Kashmiri #language-Cornish #language-Kirghiz #language-Luxembourgish #language-Ganda #language-Ligurian #language-Lingala #language-Lao #language-Lithuanian #language-Latgalian #language-Lushai #language-Latvian #language-Laz #language-Maithili #language-Moksha #language-Southwestern Tlaxiaco Mixtec #language-Malagasy #language-Eastern Mari #language-Mixtepec Mixtec #language-Macedonian #language-Malayalam #language-Mongolian #language-Manipuri #language-Marathi #language-Western Mari #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Erzya #language-Min Nan Chinese #language-Norwegian Bokmål #language-Nepali (macrolanguage) #language-Eastern Huasteca Nahuatl #language-Zacatlán-Ahuacatlán-Tepetzintla Nahuatl #language-Nias #language-Dutch #language-Norwegian Nynorsk #language-South Ndebele #language-Pedi #language-Navajo #language-Nyanja #language-Nyankole #language-Occitan (post 1500) #language-Oromo #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Pe #language-Papiamento #language-Polish #language-Pipil #language-Pushto #language-Portuguese #language-K'iche' #language-Ayacucho Quechua #language-Imbabura Highland Quichua #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Santali #language-Sardinian #language-Sicilian 
#language-Scots #language-Sindhi #language-Koyraboro Senni Songhai #language-Tachelhit #language-Sinhala #language-Slovak #language-Saraiki #language-Slovenian #language-Shona #language-Soninke #language-son #language-Albanian #language-Serbian #language-Swati #language-Southern Sotho #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Syriac #language-Silesian #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Tigrinya #language-Tigre #language-Turkmen #language-Tagalog #language-Tswana #language-Toki Pona #language-Turkish #language-Chicahuaxtla Triqui #language-Tsonga #language-Purepecha #language-Tatar #language-Twi #language-Tuvinian #language-Central Atlas Tamazight #language-Udmurt #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Venetian #language-Vietnamese #language-Votic #language-Wolof #language-Classical Armenian #language-Xhosa #language-Yiddish #language-Yoruba #language-Yucateco #language-Yue Chinese #language-Miahuatlán Zapotec #language-Standard Moroccan Tamazight #language-Chinese #language-Zulu #language-Zaza #license-mpl-2.0 #region-us
| # Dataset Card for Pontoon Translations
This is a dataset containing strings from various Mozilla projects on Mozilla's Pontoon localization platform and their translations into more than 200 languages.
Source strings are in English.
To avoid rows with values like "None" and "N/A" being interpreted as missing values, pass the keep_default_na parameter like this:
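For example (using the default `en-zgh` config; any other config name works the same way):

```python
from datasets import load_dataset

# keep_default_na is forwarded to the underlying pandas TSV reader, so strings
# such as "None" and "N/A" are kept as literal text instead of becoming NaN.
ds = load_dataset("ayymen/Pontoon-Translations", "en-zgh", keep_default_na=False)
```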
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License: Per Pontoon's terms "Translations are governed by the Mozilla Public License 2.0, or another license or set of licenses acceptable to the Mozilla Foundation."
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
- Machine Translation
- Language Identification
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
- Sentence pairs with empty/missing elements were dropped.
- Identical pairs were dropped.
- Rows where the English string does not contain any letters were dropped.
- Leading and trailing whitespace was stripped.
- Rows were deduplicated.
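A rough pandas sketch of these steps, with the file and column names assumed for illustration:

```python
import pandas as pd

# Hypothetical file and column names; the corpus ships one TSV per language pair.
df = pd.read_csv("en-zgh.tsv", sep="\t", names=["source", "target"],
                 keep_default_na=False, dtype=str)

df["source"] = df["source"].str.strip()                # strip leading/trailing whitespace
df["target"] = df["target"].str.strip()
df = df[(df["source"] != "") & (df["target"] != "")]   # drop empty/missing elements
df = df[df["source"] != df["target"]]                  # drop identical pairs
df = df[df["source"].str.contains("[a-zA-Z]")]         # drop rows whose English side has no letters
df = df.drop_duplicates()                              # deduplicate
```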
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
Pontoon users.
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Pontoon Translations\n\n\n\nThis is a dataset containing strings from various Mozilla projects on Mozilla's Pontoon localization platform and their translations into more than 200 languages.\nSource strings are in English.\n\nTo avoid rows with values like \"None\" and \"N/A\" being interpreted as missing values, pass the keep_default_na parameter like this:",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License: Per Pontoons's terms \"Translations are governed by the Mozilla Public License 2.0, or another license or set of licenses acceptable to the Mozilla Foundation.\"",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses\n\n\n- Machine Translation\n- Language Identification",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing\n\n\n\n- Sentence pairs with empty/missing elements were dropped.\n- Identical pairs were dropped.\n- Rows where the english string does not contain any letters were dropped.\n- Leading and trailing whitespace was stripped.\n- Rows were deduplicated.",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?\n\n\n\nPontoon users.",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#task_categories-translation #task_categories-text2text-generation #annotations_creators-crowdsourced #language-Abkhazian #language-Achinese #language-Acoli #language-Adyghe #language-Afrikaans #language-Amharic #language-Aragonese #language-Obolo #language-Angika #language-Arabic #language-Mapudungun #language-Assamese #language-Asturian #language-Aymara #language-Azerbaijani #language-South Azerbaijani #language-Highland Puebla Nahuatl #language-Bashkir #language-Balinese #language-Basa (Cameroon) #language-Belarusian #language-Bulgarian #language-Bambara #language-Bengali #language-Tibetan #language-Breton #language-Bodo (India) #language-Bosnian #language-Russia Buriat #language-Catalan #language-Kaqchikel #language-Cebuano #language-Central Kurdish #language-Hakha Chin #language-Corsican #language-Crimean Tatar #language-Czech #language-Kashubian #language-Chuvash #language-Welsh #language-Danish #language-Dagbani #language-German #language-Lower Sorbian #language-Dhivehi #language-Dyula #language-Ewe #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Ewondo #language-Persian #language-Fulah #language-Finnish #language-Faroese #language-French #language-Arpitan #language-Pular #language-Friulian #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Guarani #language-Goan Konkani #language-Gorontalo #language-Gujarati #language-Wayuu #language-Manx #language-Hausa #language-Hebrew #language-Hindi #language-Hiligaynon #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Huastec #language-Armenian #language-Armenian #language-Western Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Interlingue #language-Igbo #language-Iloko #language-Icelandic #language-Italian #language-Ixil #language-Japanese #language-Lojban #language-Shuar #language-Javanese #language-Georgian #language-Kara-Kalpak #language-Kabyle #language-Kabardian #language-Kazakh #language-Khmer #language-Northern Kurdish #language-Kannada #language-Korean #language-Komi-Zyrian #language-Kashmiri #language-Cornish #language-Kirghiz #language-Luxembourgish #language-Ganda #language-Ligurian #language-Lingala #language-Lao #language-Lithuanian #language-Latgalian #language-Lushai #language-Latvian #language-Laz #language-Maithili #language-Moksha #language-Southwestern Tlaxiaco Mixtec #language-Malagasy #language-Eastern Mari #language-Mixtepec Mixtec #language-Macedonian #language-Malayalam #language-Mongolian #language-Manipuri #language-Marathi #language-Western Mari #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Erzya #language-Min Nan Chinese #language-Norwegian Bokmål #language-Nepali (macrolanguage) #language-Eastern Huasteca Nahuatl #language-Zacatlán-Ahuacatlán-Tepetzintla Nahuatl #language-Nias #language-Dutch #language-Norwegian Nynorsk #language-South Ndebele #language-Pedi #language-Navajo #language-Nyanja #language-Nyankole #language-Occitan (post 1500) #language-Oromo #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Pe #language-Papiamento #language-Polish #language-Pipil #language-Pushto #language-Portuguese #language-K'iche' #language-Ayacucho Quechua #language-Imbabura Highland Quichua #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Santali #language-Sardinian 
#language-Sicilian #language-Scots #language-Sindhi #language-Koyraboro Senni Songhai #language-Tachelhit #language-Sinhala #language-Slovak #language-Saraiki #language-Slovenian #language-Shona #language-Soninke #language-son #language-Albanian #language-Serbian #language-Swati #language-Southern Sotho #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Syriac #language-Silesian #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Tigrinya #language-Tigre #language-Turkmen #language-Tagalog #language-Tswana #language-Toki Pona #language-Turkish #language-Chicahuaxtla Triqui #language-Tsonga #language-Purepecha #language-Tatar #language-Twi #language-Tuvinian #language-Central Atlas Tamazight #language-Udmurt #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Venetian #language-Vietnamese #language-Votic #language-Wolof #language-Classical Armenian #language-Xhosa #language-Yiddish #language-Yoruba #language-Yucateco #language-Yue Chinese #language-Miahuatlán Zapotec #language-Standard Moroccan Tamazight #language-Chinese #language-Zulu #language-Zaza #license-mpl-2.0 #region-us \n",
"# Dataset Card for Pontoon Translations\n\n\n\nThis is a dataset containing strings from various Mozilla projects on Mozilla's Pontoon localization platform and their translations into more than 200 languages.\nSource strings are in English.\n\nTo avoid rows with values like \"None\" and \"N/A\" being interpreted as missing values, pass the keep_default_na parameter like this:",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License: Per Pontoons's terms \"Translations are governed by the Mozilla Public License 2.0, or another license or set of licenses acceptable to the Mozilla Foundation.\"",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses\n\n\n- Machine Translation\n- Language Identification",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing\n\n\n\n- Sentence pairs with empty/missing elements were dropped.\n- Identical pairs were dropped.\n- Rows where the english string does not contain any letters were dropped.\n- Leading and trailing whitespace was stripped.\n- Rows were deduplicated.",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?\n\n\n\nPontoon users.",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
1478,
85,
4,
75,
29,
10,
4,
9,
6,
5,
7,
4,
68,
10,
9,
5,
13,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: "
] |
5fb334e215252b9dd1c8d51ad308539b31f150f0 | # Dataset Card for "patched_test_f_ATCaseOTCase_ps_40__v2023d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roa7n/patched_test_f_ATCaseOTCase_ps_40__v2023d | [
"region:us"
] | 2023-12-03T21:46:09+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 47714312, "num_examples": 130287}], "download_size": 4462477, "dataset_size": 47714312}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-15T08:42:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "patched_test_f_ATCaseOTCase_ps_40__v2023d"
More Information needed | [
"# Dataset Card for \"patched_test_f_ATCaseOTCase_ps_40__v2023d\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"patched_test_f_ATCaseOTCase_ps_40__v2023d\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"patched_test_f_ATCaseOTCase_ps_40__v2023d\"\n\nMore Information needed"
] |
d72535f77a46b6bd05d9d52f2c63d4de115b701f | # MJHQ-30K Benchmark
| Model | Overall FID |
| ------------------------------------- | ----- |
| SDXL-1-0-refiner | 9.55 |
| [playground-v2-1024px-aesthetic](https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic) | **7.07** |

We introduce a new benchmark, **MJHQ-30K**, for automatic evaluation of a model’s aesthetic quality. The benchmark computes FID on a high-quality dataset to gauge aesthetic quality.
We curate the high-quality dataset from Midjourney with 10 common categories, each category with 3K samples. Following common practice, we use aesthetic score and CLIP score to ensure high image quality and high image-text alignment. Furthermore, we take extra care to make the data diverse within each category.
For Playground v2, we report both the overall FID and per-category FID. (All FID metrics are computed at resolution 1024x1024.) From the results, our model outperforms SDXL-1-0-refiner in overall FID and in every category FID, especially in the people and fashion categories. This is in line with the results of the user study, which indicates a correlation between human preferences and the FID score on the MJHQ-30K benchmark.
We release this benchmark to the public and encourage the community to adopt it for benchmarking their models’ aesthetic quality.
Please see our [blog](https://blog.playgroundai.com/playground-v2/) for more details.
### Dataset Download
First, download `mjhq30k_imgs.zip`
```python
from huggingface_hub import hf_hub_download
hf_hub_download(
repo_id="playgroundai/MJHQ-30K",
filename="mjhq30k_imgs.zip",
local_dir="path/to/folder",
repo_type="dataset"
)
```
Unzip `mjhq30k_imgs.zip` into its per-category folder structure.
```
root
├── animals
├── art
├── fashion
├── food
├── indoor
├── landscape
├── logo
├── people
├── plants
└── vehicles
```
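The unzipping step can be scripted too; for instance, with Python's standard library (paths follow the download snippet above):

```python
import zipfile

with zipfile.ZipFile("path/to/folder/mjhq30k_imgs.zip") as zf:
    zf.extractall("path/to/folder/mjhq30k_imgs")
```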
`meta_data.json` contains metadata, including the category and prompt, for every image.
Here is one example. Note that the key is the name of the image file.
```json
"126c23ae9a879fdc05f355f9a72b418d589f3926": {
"category": "plants",
"prompt": "beautiful British garden5, clear sky, unreal engine, detailed, tropical plants ,strong ambient lighting, volumetric lighting, forest plants and leaves, strong light, a lot of leaves in font, 3D effect, 8k render, bright color rendering "
}
```
### Measure FID
To benchmark your model's performance, you need to first generate images using the same prompts in `meta_data.json`.
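For example, a minimal generation loop might look like the sketch below, where `generate_image` stands in for whatever text-to-image call your model exposes:

```python
import json
import os

with open("path/to/folder/meta_data.json") as f:
    meta = json.load(f)

for key, info in meta.items():
    image = generate_image(info["prompt"])  # hypothetical helper for your model
    out_dir = os.path.join("generated", info["category"])
    os.makedirs(out_dir, exist_ok=True)
    # reuse the reference key as the file name so image pairs stay aligned
    image.save(os.path.join(out_dir, f"{key}.png"))
```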
We calculate our FID using [clean-fid](https://github.com/GaParmar/clean-fid). You can measure the FID between the generated images and the reference images using
```python
from cleanfid import fid

ref_dir = "path/to/mjhq30k_imgs"   # reference images from this dataset
gen_dir = "path/to/generated"      # images generated from the same prompts
score = fid.compute_fid(ref_dir, gen_dir)
```
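Per-category FID can be computed the same way by pointing both arguments at matching category subfolders (directory names below are assumed):

```python
people_score = fid.compute_fid("path/to/mjhq30k_imgs/people", "generated/people")
```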
### Contributor
Dataset curated by: [Playground](https://playground.com/) Research Team
| playgroundai/MJHQ-30K | [
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"text-to-image",
"playground",
"region:us"
] | 2023-12-03T23:25:06+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "animals", "1": "art", "2": "fashion", "3": "food", "4": "indoor", "5": "landscape", "6": "logo", "7": "people", "8": "plants", "9": "vehicles"}}}}], "splits": [{"name": "test", "num_bytes": 9764107710, "num_examples": 30000}], "download_size": 8838465412, "dataset_size": 9764107710}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "tags": ["text-to-image", "playground"]} | 2023-12-06T19:04:08+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #size_categories-10K<n<100K #language-English #text-to-image #playground #region-us
| MJHQ-30K Benchmark
==================
!image/png
We introduce a new benchmark, MJHQ-30K, for automatic evaluation of a model’s aesthetic quality. The benchmark computes FID on a high-quality dataset to gauge aesthetic quality.
We curate the high-quality dataset from Midjourney with 10 common categories, each category with 3K samples. Following common practice, we use aesthetic score and CLIP score to ensure high image quality and high image-text alignment. Furthermore, we take extra care to make the data diverse within each category.
For Playground v2, we report both the overall FID and per-category FID. (All FID metrics are computed at resolution 1024x1024.) From the results, our model outperforms SDXL-1-0-refiner in overall FID and in every category FID, especially in the people and fashion categories. This is in line with the results of the user study, which indicates a correlation between human preferences and the FID score on the MJHQ-30K benchmark.
We release this benchmark to the public and encourage the community to adopt it for benchmarking their models’ aesthetic quality.
Please see our blog for more details.
### Dataset Download
First, download 'mjhq30k\_imgs.zip'
Unzip 'mjhq30k\_imgs.zip' into its per-category folder structure.
'meta\_data.json' contains metadata, including the category and prompt, for every image.
Here is one example. Note that the key is the name of the image file.
### Measure FID
To benchmark your model's performance, you need to first generate images using the same prompts in 'meta\_data.json'.
We calculate our FID using clean-fid. You can measure the FID between the generated images and the reference images using
### Contributor
Dataset curated by: Playground Research Team
| [
"### Dataset Download\n\n\nFirst, download 'mjhq30k\\_imgs.zip'\n\n\nUnzip 'mjhq30k\\_imgs.zip' into its per-category folder structure.\n\n\n'meta\\_data.json' contains metadata including its category and the promp for all the image.\n\n\nHere is one example. Note that the key is the name of the image file.",
"### Measure FID\n\n\nTo benchmark your model's performance, you need to first generate images using the same prompts in 'meta\\_data.json'.\n\n\nWe calculate our FID using clean-fid. You can measure the FID between the generated images and the reference images using",
"### Contributor\n\n\nDataset curated by: Playground Research Team"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-10K<n<100K #language-English #text-to-image #playground #region-us \n",
"### Dataset Download\n\n\nFirst, download 'mjhq30k\\_imgs.zip'\n\n\nUnzip 'mjhq30k\\_imgs.zip' into its per-category folder structure.\n\n\n'meta\\_data.json' contains metadata including its category and the promp for all the image.\n\n\nHere is one example. Note that the key is the name of the image file.",
"### Measure FID\n\n\nTo benchmark your model's performance, you need to first generate images using the same prompts in 'meta\\_data.json'.\n\n\nWe calculate our FID using clean-fid. You can measure the FID between the generated images and the reference images using",
"### Contributor\n\n\nDataset curated by: Playground Research Team"
] | [
43,
88,
65,
15
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-10K<n<100K #language-English #text-to-image #playground #region-us \n### Dataset Download\n\n\nFirst, download 'mjhq30k\\_imgs.zip'\n\n\nUnzip 'mjhq30k\\_imgs.zip' into its per-category folder structure.\n\n\n'meta\\_data.json' contains metadata including its category and the promp for all the image.\n\n\nHere is one example. Note that the key is the name of the image file.### Measure FID\n\n\nTo benchmark your model's performance, you need to first generate images using the same prompts in 'meta\\_data.json'.\n\n\nWe calculate our FID using clean-fid. You can measure the FID between the generated images and the reference images using### Contributor\n\n\nDataset curated by: Playground Research Team"
] |
83ebb1cbb661f4d33adee9f2a29ca5159cf390f5 | Question-and-answer pairs for the first 100 entries of the Aerospace Mechanisms Symposia data, chunked into 5000-word entries. The full file of entries is here: https://github.com/dsmueller3760/aerospace_chatbot/blob/llm_training/data/AMS/ams_data_answers.jsonl
See this repository for details: https://github.com/dsmueller3760/aerospace_chatbot/tree/main
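A minimal sketch for loading the linked Q&A file with the `datasets` library, assuming `ams_data_answers.jsonl` has been downloaded locally:

```python
from datasets import load_dataset

qa = load_dataset("json", data_files="ams_data_answers.jsonl", split="train")
print(qa[0])
```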
Prompts generated using TheBloke/Llama-2-7B-Chat-GGUF | ai-aerospace/ams_data_train_generic_v0.1_100 | [
"license:apache-2.0",
"region:us"
] | 2023-12-03T23:54:07+00:00 | {"license": "apache-2.0", "base-model": "TheBloke/Llama-2-7B-Chat-GGUF"} | 2023-12-17T19:57:15+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| Question-and-answer pairs for the first 100 entries of the Aerospace Mechanisms Symposia data, chunked into 5000-word entries. The full file of entries is here: URL
See this repository for details: URL
Prompts generated using TheBloke/Llama-2-7B-Chat-GGUF | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] | [
14
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
be84aeaa32b79773fcaae67c1e414673d6c2270e | # Dataset Card for "vehicle-dataset-v6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roupenminassian/vehicle-dataset-v6 | [
"region:us"
] | 2023-12-04T00:12:25+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "id", "sequence": "int64"}, {"name": "area", "sequence": "float64"}, {"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "category", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 253291624.29, "num_examples": 2222}], "download_size": 248180130, "dataset_size": 253291624.29}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-04T00:14:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vehicle-dataset-v6"
More Information needed | [
"# Dataset Card for \"vehicle-dataset-v6\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vehicle-dataset-v6\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"vehicle-dataset-v6\"\n\nMore Information needed"
] |
6669562cf1644bdf06bed14066b7bfaa43d3ba1e | # Dataset Card for "adminol-12-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Chunt0/adminol-12-3 | [
"region:us"
] | 2023-12-04T00:37:39+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11968667.0, "num_examples": 34}], "download_size": 11935418, "dataset_size": 11968667.0}} | 2023-12-04T00:39:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "adminol-12-3"
More Information needed | [
"# Dataset Card for \"adminol-12-3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"adminol-12-3\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"adminol-12-3\"\n\nMore Information needed"
] |
b60caf0129ccd01dc48ed3ad4db39a546d7f78dd | This dataset represents a translated version of the [GLUE/MRPC](https://huggingface.co/datasets/glue/viewer/mrpc) dataset, generated using the Google Translate API. | eljanmahammadli/glue-mrpc-azerbaijani | [
"task_categories:text-classification",
"language:az",
"license:mit",
"region:us"
] | 2023-12-04T00:59:43+00:00 | {"language": ["az"], "license": "mit", "task_categories": ["text-classification"]} | 2023-12-04T17:17:55+00:00 | [] | [
"az"
] | TAGS
#task_categories-text-classification #language-Azerbaijani #license-mit #region-us
| This dataset represents a translated version of the GLUE/MRPC dataset, generated using the Google Translate API. | [] | [
"TAGS\n#task_categories-text-classification #language-Azerbaijani #license-mit #region-us \n"
] | [
29
] | [
"passage: TAGS\n#task_categories-text-classification #language-Azerbaijani #license-mit #region-us \n"
] |
1fab3274c774af9eaebc59c2edeade826153a658 | # [EMNLP'23] ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue
ArXiv: https://arxiv.org/abs/2305.13602
Code: https://github.com/ImKeTT/ReSee
This is the processed data for ReSee; more raw image data are coming...
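Since the data ships as raw files, one way to fetch everything is `snapshot_download` (a sketch; the repo id is this dataset's):

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="PahaII/ReSee_data", repo_type="dataset")
print(local_dir)  # should contain processed_resee_data/ as laid out below
```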
The data should look like this:
```
.
├── ./processed_resee_data
├── dd # Contains proccessed entity-level image features and annotations of DailyDialogue
├── processed_img_features
└── img_clip_features.pt
├── test_v0.json
├── valid_v0.json
└── train_v0.json
├── wow # Contains proccessed entity-level image features and annotations of Wizard of Wikipedia
├── processed_img_features
└── img_clip_features.pt
├── test_random_v0.json
├── test_topic_v0.json
├── train_v0.json
├── valid_random_v0.json
└── valid_topic_v0.json
└── shared # Turn-level image features
├── coco
├── flickr30
├── nocaps
├── openimagev6
├── processed_img_features_clip_base # turn-level image features processed by ViT base
├── coco_train_clip_vis_fea.pt
├── coco_val_clip_vis_fea.pt
├── flickr30_clip_vis_fea.pt
├── nocaps_clip_vis_fea.pt
├── openimagev6_test_clip_vis_fea.pt
├── openimagev6_train_clip_vis_fea.pt
├── openimagev6_val_clip_vis_fea.pt
└── oodcv-counterfactual.json
└── processed_img_features_clip_large # turn-level image features processed by ViT large
├── coco_train_clip_vis_fea.pt
├── coco_val_clip_vis_fea.pt
├── flickr30_clip_vis_fea.pt
├── nocaps_clip_vis_fea.pt
├── openimagev6_test_clip_vis_fea.pt
├── openimagev6_train_clip_vis_fea.pt
├── openimagev6_val_clip_vis_fea.pt
└── oodcv-counterfactual.json
``` | PahaII/ReSee_data | [
"license:apache-2.0",
"arxiv:2305.13602",
"region:us"
] | 2023-12-04T01:05:48+00:00 | {"license": "apache-2.0"} | 2023-12-06T05:17:50+00:00 | [
"2305.13602"
] | [] | TAGS
#license-apache-2.0 #arxiv-2305.13602 #region-us
| # [EMNLP'23] ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue
ArXiv: URL
Code: URL
This is the processed data for ReSee; more raw image data are coming...
The data should look like this:
| [
"# [EMNLP'23] ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue\n\nArXiv: URL\n\nCode: URL\n\nThis is the processed data for ReSee, more raw image data are coming...\n\nThe data should looks like this:"
] | [
"TAGS\n#license-apache-2.0 #arxiv-2305.13602 #region-us \n",
"# [EMNLP'23] ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue\n\nArXiv: URL\n\nCode: URL\n\nThis is the processed data for ReSee, more raw image data are coming...\n\nThe data should looks like this:"
] | [
23,
61
] | [
"passage: TAGS\n#license-apache-2.0 #arxiv-2305.13602 #region-us \n# [EMNLP'23] ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue\n\nArXiv: URL\n\nCode: URL\n\nThis is the processed data for ReSee, more raw image data are coming...\n\nThe data should looks like this:"
] |
475966898116f114cc843f0a1b851a3a27e72ccc | # Dataset Card for "vehicle-dataset-v7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roupenminassian/vehicle-dataset-v7 | [
"region:us"
] | 2023-12-04T01:13:31+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "id", "sequence": "int64"}, {"name": "area", "sequence": "float64"}, {"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "category", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 281009635.7, "num_examples": 2438}], "download_size": 275908173, "dataset_size": 281009635.7}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-04T01:15:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vehicle-dataset-v7"
More Information needed | [
"# Dataset Card for \"vehicle-dataset-v7\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vehicle-dataset-v7\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"vehicle-dataset-v7\"\n\nMore Information needed"
] |
1ef9f9922283f3bac3d1c0aaa04a2c77912da00e |
This dataset consists of real-world recordings collected at several locations in Hangzhou, China. Researchers may find it useful for fine-tuning Speech-to-Text models. For privacy reasons, the file is encrypted, and anyone interested in the dataset should contact the uploader for the access key. However, access will only be granted to those who can provide a reasonable explanation of their intended use of the data and agree to respect the privacy of the individuals in the recordings. | PieDiscover/recordings_20230716_20230729 | [
"license:mit",
"region:us"
] | 2023-12-04T01:58:19+00:00 | {"license": "mit"} | 2023-12-04T02:34:06+00:00 | [] | [] | TAGS
#license-mit #region-us
|
This dataset consists of real-world recordings collected at several locations in Hangzhou, China. Researchers may find it useful for fine-tuning Speech-to-Text models. For privacy reasons, the file is encrypted, and anyone interested in the dataset should contact the uploader for the access key. However, access will only be granted to those who can provide a reasonable explanation of their intended use of the data and agree to respect the privacy of the individuals in the recordings. | [] | [
"TAGS\n#license-mit #region-us \n"
] | [
11
] | [
"passage: TAGS\n#license-mit #region-us \n"
] |
083b8b3e9d7b292b844c0caa499c64eb23c8b55e | # Dataset Card for "fin_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gagan3012/fin_instruct | [
"region:us"
] | 2023-12-04T02:05:29+00:00 | {"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 684832310.8001372, "num_examples": 306030}, {"name": "test", "num_bytes": 6919261.199862838, "num_examples": 3092}], "download_size": 343942467, "dataset_size": 691751572.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-05T03:24:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fin_instruct"
More Information needed | [
"# Dataset Card for \"fin_instruct\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fin_instruct\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"fin_instruct\"\n\nMore Information needed"
] |
7f7f1ee806915e39ea88f77704d3f9783e49ca38 | # Corpus
This dataset is built from Magicdata [ASR-CNANDIACSC: A CHINESE NANCHANG DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/nanchang-dialect-conversational-speech-corpus/)
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
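A rough sketch of that kind of segmentation, with the file name and timestamps invented for illustration:

```python
import soundfile as sf

audio, sr = sf.read("A0001_S001.WAV")  # hypothetical source recording

# (start_sec, end_sec, text) tuples as they would come from the transcription file
segments = [(0.0, 2.4, "..."), (2.4, 3.1, "..."), (3.1, 5.0, "...")]

for i, (start, end, text) in enumerate(segments):
    if end - start < 1.0:  # sentences spanning less than 1 second are discarded
        continue
    clip = audio[int(start * sr):int(end * sr)]
    sf.write(f"A0001_S001_{i}.wav", clip, sr)
```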
# Usage
To load this dataset, use
```python
from datasets import load_dataset
dialect_corpus = load_dataset("TingChen-ppmc/Nanchang_Dialect_Conversational_Speech_Corpus")
```
This dataset only has a train split. To split out a test split, use
```python
from datasets import load_dataset
train_split = load_dataset("TingChen-ppmc/Nanchang_Dialect_Conversational_Speech_Corpus", split="train")
# test_size=0.5 sends half of the data to the test split
corpus = train_split.train_test_split(test_size=0.5)
```
A sample looks like this:
```python
{'audio':
{'path': 'A0001_S001_0_G0001_0.WAV',
'array': array([-0.00030518, -0.00039673,
-0.00036621, ..., -0.00064087,
-0.00015259, -0.00042725]),
'sampling_rate': 16000},
'gender': '女',
'speaker_id': 'G0001',
'transcription': '北京爱数智慧语音采集'
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TingChen-ppmc/Nanchang_Dialect_Conversational_Speech_Corpus | [
"region:us"
] | 2023-12-04T02:43:20+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "gender", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 151062019.576, "num_examples": 1668}], "download_size": 154068613, "dataset_size": 151062019.576}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T15:46:07+00:00 | [] | [] | TAGS
#region-us
| # Corpus
This dataset is built from Magicdata ASR-CNANDIACSC: A CHINESE NANCHANG DIALECT CONVERSATIONAL SPEECH CORPUS
This corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
# Usage
To load this dataset, use
This dataset only has a train split. To split out a test split, use
A sample looks like this:
More Information needed | [
"# Corpus\n\nThis dataset is built from Magicdata ASR-CNANDIACSC: A CHINESE NANCHANG DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Corpus\n\nThis dataset is built from Magicdata ASR-CNANDIACSC: A CHINESE NANCHANG DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
6,
110,
34
] | [
"passage: TAGS\n#region-us \n# Corpus\n\nThis dataset is built from Magicdata ASR-CNANDIACSC: A CHINESE NANCHANG DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] |
72684876cc239596e63564ed87686950985903f7 | # Dataset Card for "translation_pbb_spa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Broomva/translation_pbb_spa | [
"region:us"
] | 2023-12-04T02:46:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "translation", "struct": [{"name": "pbb", "dtype": "string"}, {"name": "spa", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 109901, "num_examples": 2428}, {"name": "validation", "num_bytes": 31457, "num_examples": 607}, {"name": "test", "num_bytes": 33789, "num_examples": 759}], "download_size": 137803, "dataset_size": 175147}} | 2023-12-04T02:46:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "translation_pbb_spa"
More Information needed | [
"# Dataset Card for \"translation_pbb_spa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"translation_pbb_spa\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"translation_pbb_spa\"\n\nMore Information needed"
] |
4feb0c4bdb33443d11710a966e6130023ab0e938 | # Corpus
This dataset is built from Magicdata [ASR-CZDIACSC: A CHINESE SHANGHAI DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/shanghai-dialect-conversational-speech-corpus/)
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
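
The preprocessing script itself is not published; the sketch below is only an illustration of how such timestamp-based segmentation could be done, assuming each conversation yields `(start_sec, end_sec, text)` spans parsed from its transcription file:

```python
# Illustrative sketch only -- not the actual preprocessing code.
import soundfile as sf

def split_by_spans(wav_path, spans, min_dur=1.0):
    """Yield (audio_slice, text) pairs, dropping spans shorter than min_dur seconds."""
    audio, sr = sf.read(wav_path)  # decode the full conversation
    for start, end, text in spans:
        if end - start < min_dur:  # sentences under 1 second are discarded
            continue
        yield audio[int(start * sr):int(end * sr)], text
```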
# Usage
To load this dataset, use
```python
from datasets import load_dataset
dialect_corpus = load_dataset("TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus")
```
This dataset only has a train split. To split out a test split, use
```python
from datasets import load_dataset
train_split = load_dataset("TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus", split="train")
# test_size=0.5 sends half of the examples to the test split
corpus = train_split.train_test_split(test_size=0.5)
```
A sample record looks like this:
```python
# note: this sample comes from the Nanchang Dialect corpus; all of these corpora share the same format
{'audio':
{'path': 'A0001_S001_0_G0001_0.WAV',
'array': array([-0.00030518, -0.00039673,
-0.00036621, ..., -0.00064087,
-0.00015259, -0.00042725]),
'sampling_rate': 16000},
'gender': '女',  # "female"
'speaker_id': 'G0001',
'transcription': '北京爱数智慧语音采集'  # roughly: "Beijing Magic Data speech collection"
}
```
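
Since the audio ships at 16 kHz, the `audio` column decodes to that rate by default; if a model expects a different rate, a `datasets.Audio` cast resamples on access:

```python
from datasets import Audio, load_dataset

corpus = load_dataset("TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus", split="train")

# Change sampling_rate here to resample on the fly (16_000 is the native rate)
corpus = corpus.cast_column("audio", Audio(sampling_rate=16_000))
sample = corpus[0]["audio"]
print(sample["array"].shape, sample["sampling_rate"])
```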
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus | [
"region:us"
] | 2023-12-04T02:48:52+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "gender", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 422057259.808, "num_examples": 3792}], "download_size": 436738370, "dataset_size": 422057259.808}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T15:53:01+00:00 | [] | [] | TAGS
#region-us
| # Corpus
This dataset is built from Magicdata ASR-CZDIACSC: A CHINESE SHANGHAI DIALECT CONVERSATIONAL SPEECH CORPUS
This corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
# Usage
To load this dataset, use
This dataset only has a train split. To split out a test split, use
A sample record looks like this
More Information needed | [
"# Corpus\n\nThis dataset is built from Magicdata ASR-CZDIACSC: A CHINESE SHANGHAI DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Corpus\n\nThis dataset is built from Magicdata ASR-CZDIACSC: A CHINESE SHANGHAI DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
6,
111,
34
] | [
"passage: TAGS\n#region-us \n# Corpus\n\nThis dataset is built from Magicdata ASR-CZDIACSC: A CHINESE SHANGHAI DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] |
980b7b83baa796360edc3adbed90065f07938be8 | # Corpus
This dataset is built from Magicdata [ASR-CTJDIACSC: A CHINESE TIANJIN DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/tianjin-dialect-conversational-speech-corpus-2/)
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
# Usage
To load this dataset, use
```python
from datasets import load_dataset
dialect_corpus = load_dataset("TingChen-ppmc/Tianjin_Dialect_Conversational_Speech_Corpus")
```
This dataset only has a train split. To split out a test split, use
```python
from datasets import load_dataset
train_split = load_dataset("TingChen-ppmc/Tianjin_Dialect_Conversational_Speech_Corpus", split="train")
# test_size=0.5 sends half of the examples to the test split
corpus = train_split.train_test_split(test_size=0.5)
```
A sample record looks like this:
```python
# note: this sample comes from the Nanchang Dialect corpus; all of these corpora share the same format
{'audio':
{'path': 'A0001_S001_0_G0001_0.WAV',
'array': array([-0.00030518, -0.00039673,
-0.00036621, ..., -0.00064087,
-0.00015259, -0.00042725]),
'sampling_rate': 16000},
'gender': '女',  # "female"
'speaker_id': 'G0001',
'transcription': '北京爱数智慧语音采集'  # roughly: "Beijing Magic Data speech collection"
}
```
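
Per-clip durations are not stored as a column, but they can be derived from the decoded audio; the 30-second cutoff below is an arbitrary example, not a property of the corpus:

```python
from datasets import load_dataset

corpus = load_dataset("TingChen-ppmc/Tianjin_Dialect_Conversational_Speech_Corpus", split="train")

def add_duration(example):
    audio = example["audio"]
    example["duration"] = len(audio["array"]) / audio["sampling_rate"]
    return example

corpus = corpus.map(add_duration)                         # duration in seconds per clip
corpus = corpus.filter(lambda ex: ex["duration"] < 30.0)  # arbitrary cutoff
```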
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TingChen-ppmc/Tianjin_Dialect_Conversational_Speech_Corpus | [
"region:us"
] | 2023-12-04T02:50:46+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "gender", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 384247410.342, "num_examples": 5166}], "download_size": 370228314, "dataset_size": 384247410.342}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T15:50:43+00:00 | [] | [] | TAGS
#region-us
| # Corpus
This dataset is built from Magicdata ASR-CTJDIACSC: A CHINESE TIANJIN DIALECT CONVERSATIONAL SPEECH CORPUS
This corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
# Usage
To load this dataset, use
This dataset only has a train split. To split out a test split, use
A sample record looks like this
More Information needed | [
"# Corpus\n\nThis dataset is built from Magicdata ASR-CTJDIACSC: A CHINESE TIANJIN DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Corpus\n\nThis dataset is built from Magicdata ASR-CTJDIACSC: A CHINESE TIANJIN DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
6,
112,
34
] | [
"passage: TAGS\n#region-us \n# Corpus\n\nThis dataset is built from Magicdata ASR-CTJDIACSC: A CHINESE TIANJIN DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] |
a6cbb0bd343b9e66bf9666038fb59101094ce074 | # Corpus
This dataset is built from Magicdata [ASR-CZDIACSC: A CHINESE ZHENGZHOU DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/zhengzhou-dialect-conversational-speech-corpus/)
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
# Usage
To load this dataset, use
```python
from datasets import load_dataset
dialect_corpus = load_dataset("TingChen-ppmc/Zhengzhou_Dialect_Conversational_Speech_Corpus")
```
This dataset only has a train split. To split out a test split, use
```python
from datasets import load_dataset
train_split = load_dataset("TingChen-ppmc/Zhengzhou_Dialect_Conversational_Speech_Corpus", split="train")
# test_size=0.5 sends half of the examples to the test split
corpus = train_split.train_test_split(test_size=0.5)
```
A sample record looks like this:
```python
# note: this sample comes from the Nanchang Dialect corpus; all of these corpora share the same format
{'audio':
{'path': 'A0001_S001_0_G0001_0.WAV',
'array': array([-0.00030518, -0.00039673,
-0.00036621, ..., -0.00064087,
-0.00015259, -0.00042725]),
'sampling_rate': 16000},
'gender': '女',  # "female"
'speaker_id': 'G0001',
'transcription': '北京爱数智慧语音采集'  # roughly: "Beijing Magic Data speech collection"
}
```
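
To get a rough feel for how an off-the-shelf recognizer handles the dialect, a sample can be fed to a Transformers ASR pipeline; the checkpoint named below is only a placeholder, and accuracy on Zhengzhou speech will vary:

```python
from datasets import load_dataset
from transformers import pipeline

corpus = load_dataset("TingChen-ppmc/Zhengzhou_Dialect_Conversational_Speech_Corpus", split="train")

# Placeholder checkpoint -- substitute any Mandarin-capable ASR model
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

sample = corpus[0]["audio"]
print(asr({"raw": sample["array"], "sampling_rate": sample["sampling_rate"]}))
print(corpus[0]["transcription"])  # reference transcription
```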
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TingChen-ppmc/Zhengzhou_Dialect_Conversational_Speech_Corpus | [
"region:us"
] | 2023-12-04T02:53:56+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "gender", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 198995934.614, "num_examples": 2006}], "download_size": 179378562, "dataset_size": 198995934.614}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T15:51:51+00:00 | [] | [] | TAGS
#region-us
| # Corpus
This dataset is built from Magicdata ASR-CZDIACSC: A CHINESE ZHENGZHOU DIALECT CONVERSATIONAL SPEECH CORPUS
This corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
# Usage
To load this dataset, use
This dataset only has a train split. To split out a test split, use
A sample record looks like this
More Information needed | [
"# Corpus\n\nThis dataset is built from Magicdata ASR-CZDIACSC: A CHINESE ZHENGZHOU DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Corpus\n\nThis dataset is built from Magicdata ASR-CZDIACSC: A CHINESE ZHENGZHOU DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.",
"# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] | [
6,
112,
34
] | [
"passage: TAGS\n#region-us \n# Corpus\n\nThis dataset is built from Magicdata ASR-CZDIACSC: A CHINESE ZHENGZHOU DIALECT CONVERSATIONAL SPEECH CORPUS\n\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to the license for further information.\n\nModifications: The audio is split in sentences based on the time span on the transcription file. Sentences that span less than 1 second is discarded. Topics of conversation is removed.# Usage\n\nTo load this dataset, use\n\n\n\nThis dataset only has train split. To split out a test split, use\n\n\n\nA sample data would be\n\n\n\n\n\nMore Information needed"
] |