sha (string, 40–40) | text (string, 1–13.4M) | id (string, 2–117) | tags (list, 1–7.91k) | created_at (string, 25–25) | metadata (string, 2–875k) | last_modified (string, 25–25) | arxiv (list, 0–25) | languages (list, 0–7.91k) | tags_str (string, 17–159k) | text_str (string, 1–447k) | text_lists (list, 0–352) | processed_texts (list, 1–353) | tokens_length (list, 1–353) | input_texts (list, 1–40)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
473c2efb0a0b1618dacd09d29d10936b81bcb725
|
# Dataset Card for "oasst1-delib"
Subset of `OpenAssistant/oasst1` with English chat messages that (are supposed to) contain reasoning:
* filtered by keyword "pros"
* includes chat history as extra feature
Dataset creation is documented in https://github.com/logikon-ai/deliberation-datasets/blob/main/notebooks/create_oasst1_delib.ipynb
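A minimal sketch of the keyword filtering described above, assuming the Hugging Face `datasets` library; the linked notebook remains the authoritative recipe:
```python
# Minimal sketch of the filtering described above: keep English messages
# that contain the keyword "pros". The linked notebook is the actual recipe.
from datasets import load_dataset

oasst1 = load_dataset("OpenAssistant/oasst1", split="train")
delib = oasst1.filter(
    lambda ex: ex["lang"] == "en" and "pros" in ex["text"].lower()
)
print(len(delib))
```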
|
logikon/oasst1-delib
|
[
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-09-21T08:42:05+00:00
|
{"language": ["en"], "license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "message_id", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "created_date", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "role", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "review_count", "dtype": "int32"}, {"name": "review_result", "dtype": "bool"}, {"name": "deleted", "dtype": "bool"}, {"name": "rank", "dtype": "float64"}, {"name": "synthetic", "dtype": "bool"}, {"name": "model_name", "dtype": "null"}, {"name": "detoxify", "struct": [{"name": "identity_attack", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "obscene", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "sexual_explicit", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "toxicity", "dtype": "float64"}]}, {"name": "message_tree_id", "dtype": "string"}, {"name": "tree_state", "dtype": "string"}, {"name": "emojis", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}]}, {"name": "labels", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}, {"name": "value", "sequence": "float64"}]}, {"name": "history", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 278875, "num_examples": 90}, {"name": "validation", "num_bytes": 18290, "num_examples": 6}], "download_size": 208227, "dataset_size": 297165}}
|
2023-09-27T13:23:02+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-apache-2.0 #region-us
|
# Dataset Card for "oasst1-delib"
Subset of 'OpenAssistant/oasst1' with English chat messages that (are supposed to) contain reasoning:
* filtered by keyword "pros"
* includes chat history as extra feature
Dataset creation is documented in URL
|
[
"# Dataset Card for \"oasst1-delib\"\n\nSubset of 'OpenAssistant/oasst1' with English chat messages that (are supposed to) contain reasoning:\n\n* filtered by keyword \"pros\"\n* includes chat history as extra feature\n\nDataset creation is documented in URL"
] |
[
"TAGS\n#language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for \"oasst1-delib\"\n\nSubset of 'OpenAssistant/oasst1' with English chat messages that (are supposed to) contain reasoning:\n\n* filtered by keyword \"pros\"\n* includes chat history as extra feature\n\nDataset creation is documented in URL"
] |
[
18,
64
] |
[
"passage: TAGS\n#language-English #license-apache-2.0 #region-us \n# Dataset Card for \"oasst1-delib\"\n\nSubset of 'OpenAssistant/oasst1' with English chat messages that (are supposed to) contain reasoning:\n\n* filtered by keyword \"pros\"\n* includes chat history as extra feature\n\nDataset creation is documented in URL"
] |
13292f0072ceba2d466cb9b7125a104399d2f02b
|
## Overview
This dataset contains orthophoto (Orthofoto) and LiDAR (Laser) data, organized into folders named after the area and year they are from.
## Dataset Structure
- Geodata
- Kristiansand.zip (example)
- fgb
- Vann_22.fgb...
- geojson
- Vann_22.geojson...
- Ortofoto
- Agder_og_Telemark_2021.zip (example)
- Agder_og_Telemark_2021.zip_mosaic_cog.tif
- Laser
- Bergen_2pkt_2010
- Bergen_2pkt_2010_mosaic.laz
## Usage
So far the data has only been used in QGIS, to display the Ortofoto, Laser and Geodata layers.
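Below is a minimal sketch of how the example files above could be opened in Python; it assumes the archives have been extracted and that `geopandas`, `rasterio` and `laspy` (with a LAZ backend such as `lazrs`) are installed:
```python
# Minimal sketch for opening the example files listed above; file names are
# the examples from the structure section and paths depend on where the
# archives were extracted.
import geopandas as gpd
import laspy
import rasterio

# Vector geodata (FlatGeobuf / GeoJSON), e.g. from Geodata/Kristiansand.zip
water = gpd.read_file("Vann_22.fgb")
print(water.crs, len(water))

# Orthophoto mosaic (Cloud Optimized GeoTIFF), e.g. from Ortofoto/Agder_og_Telemark_2021.zip
with rasterio.open("Agder_og_Telemark_2021.zip_mosaic_cog.tif") as src:
    print(src.width, src.height, src.count)

# LiDAR point cloud, e.g. from Laser/Bergen_2pkt_2010
las = laspy.read("Bergen_2pkt_2010_mosaic.laz")
print(las.header.point_count)
```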
|
kartai/DX_datasett
|
[
"size_categories:n<1K",
"language:no",
"map",
"tif",
"laz",
"fgb",
"geojson",
"region:us"
] |
2023-09-21T08:42:25+00:00
|
{"language": ["no"], "size_categories": ["n<1K"], "pretty_name": "DX_Dataset", "tags": ["map", "tif", "laz", "fgb", "geojson"]}
|
2023-10-18T09:33:31+00:00
|
[] |
[
"no"
] |
TAGS
#size_categories-n<1K #language-Norwegian #map #tif #laz #fgb #geojson #region-us
|
## Overview
This dataset contains orthophoto (Orthofoto) and LiDAR (Laser) data, organized into folders named after the area and year they are from.
## Dataset Structure
- Geodata
- URL (example)
- fgb
- Vann_22.fgb...
- geojson
- Vann_22.geojson...
- Ortofoto
- Agder_og_Telemark_2021.zip (example)
- Agder_og_Telemark_2021.zip_mosaic_cog.tif
- Laser
- Bergen_2pkt_2010
- Bergen_2pkt_2010_mosaic.laz
## Usage
So far the data has only been used in QGIS, to display the Ortofoto, Laser and Geodata layers.
|
[
"## Overview\n\nOrthophoto (Orthofoto) and LiDAR (Laser) data, which are organized into folders named after the area and year they are from.",
"## Dataset Structure\n\n- Geodata\n - URL (example)\n - fgb\n - Vann_22.fgb...\n - geojson\n - Vann_22.geojson...\n- Ortofoto\n - Agder_og_Telemark_2021.zip (example)\n - Agder_og_Telemark_2021.zip_mosaic_cog.tif\n- Laser\n - Bergen_2pkt_2010\n - Bergen_2pkt_2010_mosaic.laz",
"## Usage\n\nCurrently only been used in QGIS to display Ortofoto, Laser Data and Geodata."
] |
[
"TAGS\n#size_categories-n<1K #language-Norwegian #map #tif #laz #fgb #geojson #region-us \n",
"## Overview\n\nOrthophoto (Orthofoto) and LiDAR (Laser) data, which are organized into folders named after the area and year they are from.",
"## Dataset Structure\n\n- Geodata\n - URL (example)\n - fgb\n - Vann_22.fgb...\n - geojson\n - Vann_22.geojson...\n- Ortofoto\n - Agder_og_Telemark_2021.zip (example)\n - Agder_og_Telemark_2021.zip_mosaic_cog.tif\n- Laser\n - Bergen_2pkt_2010\n - Bergen_2pkt_2010_mosaic.laz",
"## Usage\n\nCurrently only been used in QGIS to display Ortofoto, Laser Data and Geodata."
] |
[
35,
38,
108,
23
] |
[
"passage: TAGS\n#size_categories-n<1K #language-Norwegian #map #tif #laz #fgb #geojson #region-us \n## Overview\n\nOrthophoto (Orthofoto) and LiDAR (Laser) data, which are organized into folders named after the area and year they are from.## Dataset Structure\n\n- Geodata\n - URL (example)\n - fgb\n - Vann_22.fgb...\n - geojson\n - Vann_22.geojson...\n- Ortofoto\n - Agder_og_Telemark_2021.zip (example)\n - Agder_og_Telemark_2021.zip_mosaic_cog.tif\n- Laser\n - Bergen_2pkt_2010\n - Bergen_2pkt_2010_mosaic.laz## Usage\n\nCurrently only been used in QGIS to display Ortofoto, Laser Data and Geodata."
] |
61bef5640f28944f1d54d011b953350c6ffddd32
|
# Dataset Card for "next-dataset-refined-batch-0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-0
|
[
"region:us"
] |
2023-09-21T08:43:08+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 331199460.0, "num_examples": 1000}], "download_size": 304483916, "dataset_size": 331199460.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T08:45:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-0"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-0\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-0\"\n\nMore Information needed"
] |
e35a3ca64d8a9e94ccba29d8f197f392ea9cccb0
|
# Dataset Card for "next-dataset-refined-batch-1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-1000
|
[
"region:us"
] |
2023-09-21T09:05:04+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 292983973.0, "num_examples": 1000}], "download_size": 263093694, "dataset_size": 292983973.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T09:06:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-1000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-1000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-1000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-1000\"\n\nMore Information needed"
] |
524a4017da0dcdd6ad7b694893371a88d7bfa544
|
# Dataset Card for "cartoonizer-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SminC/cartoonizer-dataset
|
[
"region:us"
] |
2023-09-21T09:11:27+00:00
|
{"dataset_info": {"features": [{"name": "original_image", "dtype": "image"}, {"name": "edit_prompt", "dtype": "string"}, {"name": "cartoonized_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 31770277.0, "num_examples": 50}], "download_size": 31772590, "dataset_size": 31770277.0}}
|
2023-09-21T09:11:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cartoonizer-dataset"
More Information needed
|
[
"# Dataset Card for \"cartoonizer-dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cartoonizer-dataset\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cartoonizer-dataset\"\n\nMore Information needed"
] |
1abcebd24ff9d9e6e47e257cf267e78d43f2ef85
|
# Dataset Card for "next-dataset-refined-batch-2000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-2000
|
[
"region:us"
] |
2023-09-21T09:27:22+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 303690944.0, "num_examples": 1000}], "download_size": 275266590, "dataset_size": 303690944.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T09:29:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-2000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-2000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-2000\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-2000\"\n\nMore Information needed"
] |
148fc2c3f60418d11b5dbb4fb7477913ed9e56de
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
LRoussel/dessin
|
[
"task_categories:image-to-image",
"size_categories:n<1K",
"language:fr",
"license:openrail",
"region:us"
] |
2023-09-21T09:32:48+00:00
|
{"language": ["fr"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["image-to-image"], "pretty_name": "train_dessin"}
|
2023-09-21T09:38:49+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-image-to-image #size_categories-n<1K #language-French #license-openrail #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-image-to-image #size_categories-n<1K #language-French #license-openrail #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
40,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-image-to-image #size_categories-n<1K #language-French #license-openrail #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
624952e3f420ae18d88b31977ee2ea436c833abb
|
The dataset contains 12k examples from the [Orca](https://arxiv.org/abs/2306.02707)-style dataset [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
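A minimal loading sketch with the Hugging Face `datasets` library; the column names are printed rather than assumed, since they are not listed here:
```python
# Minimal loading sketch; the schema is printed rather than assumed.
from datasets import load_dataset

pairs = load_dataset("Intel/orca_dpo_pairs", split="train")
print(pairs.column_names)
print(len(pairs))  # roughly 12k examples, per the card
```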
|
Intel/orca_dpo_pairs
|
[
"license:apache-2.0",
"arxiv:2306.02707",
"region:us"
] |
2023-09-21T09:35:16+00:00
|
{"license": "apache-2.0"}
|
2023-11-29T14:11:17+00:00
|
[
"2306.02707"
] |
[] |
TAGS
#license-apache-2.0 #arxiv-2306.02707 #region-us
|
The dataset contains 12k examples from the Orca-style dataset Open-Orca/OpenOrca.
|
[] |
[
"TAGS\n#license-apache-2.0 #arxiv-2306.02707 #region-us \n"
] |
[
22
] |
[
"passage: TAGS\n#license-apache-2.0 #arxiv-2306.02707 #region-us \n"
] |
5ac92ad1af8ef34f9ecd8d54d60a73c0289fe0c3
|
# Materials in Context Dataset (MINC-2500)
## Dataset Description
- **Homepage:** http://opensurfaces.cs.cornell.edu/publications/minc/
- **Paper:** https://openaccess.thecvf.com/content_cvpr_2015/html/Bell_Material_Recognition_in_2015_CVPR_paper.html
## Dataset Summary
(from the website)
MINC-2500 is a patch classification dataset with 2500 samples per category
(Section 5.4 of the paper). This is a subset of MINC where samples have been
sized to 362 x 362 and each category is sampled evenly. The original resolution
images are not needed as we include the extracted patches in the archive.
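A minimal loading sketch, assuming the split loads directly with the Hugging Face `datasets` library; the label names come from the dataset metadata in this row:
```python
# Minimal loading sketch; label names are taken from the dataset metadata.
from datasets import load_dataset

minc = load_dataset("mcimpoi/minc-2500_split_1")
print(minc)                                        # train / test / validation splits
print(minc["train"].features["label"].names[:5])   # ['brick', 'carpet', 'ceramic', 'fabric', 'foliage']

sample = minc["train"][0]
print(sample["image"].size, sample["label"])       # 362 x 362 patches
```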
|
mcimpoi/minc-2500_split_1
|
[
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-21T09:41:35+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "brick", "1": "carpet", "2": "ceramic", "3": "fabric", "4": "foliage", "5": "food", "6": "glass", "7": "hair", "8": "leather", "9": "metal", "10": "mirror", "11": "other", "12": "painted", "13": "paper", "14": "plastic", "15": "polishedstone", "16": "skin", "17": "sky", "18": "stone", "19": "tile", "20": "wallpaper", "21": "water", "22": "wood"}}}}], "splits": [{"name": "train", "num_bytes": 2017774670.25, "num_examples": 48875}, {"name": "test", "num_bytes": 240662928, "num_examples": 5750}, {"name": "validation", "num_bytes": 135991437.125, "num_examples": 2875}], "download_size": 2350770974, "dataset_size": 2394429035.375}}
|
2023-09-21T09:59:41+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us
|
# Materials in Context Dataset (MINC-2500)
## Dataset Description
- Homepage: URL
- Paper: URL
## Dataset Summary
(from the website)
MINC-2500 is a patch classification dataset with 2500 samples per category
(Section 5.4 of the paper). This is a subset of MINC where samples have been
sized to 362 x 362 and each category is sampled evenly. The original resolution
images are not needed as we include the extracted patches in the archive.
|
[
"# Materials in Context Dataset (MINC-2500)",
"## Dataset Description\n- Homepage: URL\n- Paper: URL",
"## Dataset Summary\n(from the website)\nMINC-2500 is a patch classification dataset with 2500 samples per category \n(Section 5.4 of the paper). This is a subset of MINC where samples have been\nsized to 362 x 362 and each category is sampled evenly. The original resolution\nimages are not needed as we include the extracted patches in the archive."
] |
[
"TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n",
"# Materials in Context Dataset (MINC-2500)",
"## Dataset Description\n- Homepage: URL\n- Paper: URL",
"## Dataset Summary\n(from the website)\nMINC-2500 is a patch classification dataset with 2500 samples per category \n(Section 5.4 of the paper). This is a subset of MINC where samples have been\nsized to 362 x 362 and each category is sampled evenly. The original resolution\nimages are not needed as we include the extracted patches in the archive."
] |
[
42,
14,
12,
86
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n# Materials in Context Dataset (MINC-2500)## Dataset Description\n- Homepage: URL\n- Paper: URL## Dataset Summary\n(from the website)\nMINC-2500 is a patch classification dataset with 2500 samples per category \n(Section 5.4 of the paper). This is a subset of MINC where samples have been\nsized to 362 x 362 and each category is sampled evenly. The original resolution\nimages are not needed as we include the extracted patches in the archive."
] |
a4d80a8b2cdc6d86a4454da686d8d76096b2902b
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Captluke/llama2-wiki-v3
|
[
"language:en",
"region:us"
] |
2023-09-21T09:46:59+00:00
|
{"language": ["en"]}
|
2023-09-21T09:50:19+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#language-English #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
10,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#language-English #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
16cc5019db9c135d735d3f5a724ca3f544837198
|
# Dataset Card for "next-dataset-refined-batch-3000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-3000
|
[
"region:us"
] |
2023-09-21T09:56:06+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 297746292.0, "num_examples": 999}], "download_size": 268205162, "dataset_size": 297746292.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T09:57:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-3000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-3000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-3000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-3000\"\n\nMore Information needed"
] |
1e4d29f1df18985e65e72ca6f2dec7a02183520d
|
# Dataset Card for "pokemon_caption_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SminC/pokemon_caption_data
|
[
"region:us"
] |
2023-09-21T10:00:00+00:00
|
{"dataset_info": {"features": [{"name": "original_image", "dtype": "image"}, {"name": "edit_prompt", "dtype": "string"}, {"name": "colored_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 25225724.0, "num_examples": 303}], "download_size": 25174197, "dataset_size": 25225724.0}}
|
2023-09-21T10:09:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pokemon_caption_data"
More Information needed
|
[
"# Dataset Card for \"pokemon_caption_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pokemon_caption_data\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pokemon_caption_data\"\n\nMore Information needed"
] |
ad9f07b5befaf3ee9e8e6ba0197b60886143de10
|
# Dataset Card for "pxcorpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mattlc/pxcorpus
|
[
"region:us"
] |
2023-09-21T10:02:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "duration", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 395874656.0, "num_examples": 1584}, {"name": "test", "num_bytes": 95619380.0, "num_examples": 397}], "download_size": 465681717, "dataset_size": 491494036.0}}
|
2023-12-08T14:58:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pxcorpus"
More Information needed
|
[
"# Dataset Card for \"pxcorpus\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pxcorpus\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pxcorpus\"\n\nMore Information needed"
] |
c803f898e9427ba0257a9d98e251205d069d9386
|
# Dataset Card for "next-dataset-refined-batch-4000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-4000
|
[
"region:us"
] |
2023-09-21T10:17:44+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 316595519.0, "num_examples": 1000}], "download_size": 289227918, "dataset_size": 316595519.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T10:19:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-4000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-4000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-4000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-4000\"\n\nMore Information needed"
] |
808f5508bf72dece0249f5ad3f0426c6a49f0bc5
|
# Dataset Card for "tokenized_chitanka"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mor40/tokenized_chitanka
|
[
"region:us"
] |
2023-09-21T10:24:36+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3200443200, "num_examples": 889012}], "download_size": 1005331841, "dataset_size": 3200443200}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T10:25:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tokenized_chitanka"
More Information needed
|
[
"# Dataset Card for \"tokenized_chitanka\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_chitanka\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_chitanka\"\n\nMore Information needed"
] |
d3e098828277176e7beafd4a1cf15a7ef451edab
|
# Dataset Card for "next-dataset-refined-batch-5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-5000
|
[
"region:us"
] |
2023-09-21T10:38:56+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 307226208.0, "num_examples": 1000}], "download_size": 278805299, "dataset_size": 307226208.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T10:40:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-5000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-5000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-5000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-5000\"\n\nMore Information needed"
] |
7b0d41faef3172a5afaf486f789c4f55ff8176d7
|
# Dataset Card for "parallel_azeri_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
language-ml-lab/parallel_azeri_dataset
|
[
"region:us"
] |
2023-09-21T10:43:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "korughlu", "path": "data/korughlu-*"}, {"split": "azeri_dictionary", "path": "data/azeri_dictionary-*"}]}], "dataset_info": {"features": [{"name": "persian", "dtype": "string"}, {"name": "latin", "dtype": "string"}], "splits": [{"name": "korughlu", "num_bytes": 8887, "num_examples": 339}, {"name": "azeri_dictionary", "num_bytes": 2156187, "num_examples": 66732}], "download_size": 1201060, "dataset_size": 2165074}}
|
2023-09-21T13:04:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "parallel_azeri_dataset"
More Information needed
|
[
"# Dataset Card for \"parallel_azeri_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"parallel_azeri_dataset\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"parallel_azeri_dataset\"\n\nMore Information needed"
] |
1bedd46e4ceffd3bd24ebceedb1254171f4242fd
|
# Dataset Card for "20000sample_COT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DopeorNope/20000sample_COT
|
[
"region:us"
] |
2023-09-21T10:49:05+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23066106, "num_examples": 21297}], "download_size": 9606299, "dataset_size": 23066106}}
|
2023-09-21T10:57:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "20000sample_COT"
More Information needed
|
[
"# Dataset Card for \"20000sample_COT\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"20000sample_COT\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"20000sample_COT\"\n\nMore Information needed"
] |
94e86a8605b600ac6866fbb704c747a355772ac1
|
# Dataset Card for "next-dataset-refined-batch-6000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-6000
|
[
"region:us"
] |
2023-09-21T10:59:55+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 315307268.0, "num_examples": 999}], "download_size": 288501432, "dataset_size": 315307268.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T11:01:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-6000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-6000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-6000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-6000\"\n\nMore Information needed"
] |
efcc151cce6d8be19b15c79cbb4cba4694f91db0
|
# Dataset Card for "2000sample_COT"
# DopeorNope/Eng_Kor_COT_combined
- KOpen-platypus + DopeorNope/2000sample_COT
- If you build a model or a dataset using this data, a brief attribution would be a great help to our research 😭😭
- A high-quality Korean dataset, plus an English + Korean dataset constructed in a COT (chain-of-thought) format
---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DopeorNope/2000sample_COT
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-09-21T11:01:52+00:00
|
{"license": "cc-by-nc-sa-4.0", "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2298020, "num_examples": 2159}], "download_size": 1099835, "dataset_size": 2298020}}
|
2023-10-19T14:37:10+00:00
|
[] |
[] |
TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
# Dataset Card for "2000sample_COT"
# DopeorNope/Eng_Kor_COT_combined
- KOpen-platypus + DopeorNope/2000sample_COT
- If you build a model or a dataset using this data, a brief attribution would be a great help to our research
- A high-quality Korean dataset, plus an English + Korean dataset constructed in a COT (chain-of-thought) format
---
More Information needed
|
[
"# Dataset Card for \"2000sample_COT\"",
"# DopeorNope/Eng_Kor_COT_combined\n- KOpen-platypus + DopeorNope/2000sample_COT\n\n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다\n\n- 고품질 한국어 데이터셋 + COT 방식으로 구성한 영어+ 한국어 dataset구성\n\n---\n\nMore Information needed"
] |
[
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for \"2000sample_COT\"",
"# DopeorNope/Eng_Kor_COT_combined\n- KOpen-platypus + DopeorNope/2000sample_COT\n\n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다\n\n- 고품질 한국어 데이터셋 + COT 방식으로 구성한 영어+ 한국어 dataset구성\n\n---\n\nMore Information needed"
] |
[
19,
13,
90
] |
[
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n# Dataset Card for \"2000sample_COT\"# DopeorNope/Eng_Kor_COT_combined\n- KOpen-platypus + DopeorNope/2000sample_COT\n\n- 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다\n\n- 고품질 한국어 데이터셋 + COT 방식으로 구성한 영어+ 한국어 dataset구성\n\n---\n\nMore Information needed"
] |
8cc5cc4d05008e3411941212adf60ce5df596546
|
# Dataset Card for LAION-EO
## Dataset Description
- **Point of Contact:** Mikolaj Czerkawski, [email protected]
### Dataset Summary
This dataset contains a subset of LAION-5B containing images that are likely to be satellite images. The procedure of acquiring and filtering the dataset has been described in https://arxiv.org/abs/2309.15535.
## Dataset Structure
Each version of the dataset contains a .csv file with metadata, including URLs to the images, which can easily be filtered. Note that the linked images could be copyrighted.
### Data Fields
|Field|Description|
|:---|:---|
|**source**| Index of the anchor sample |
|**url**| Link to the image |
|**filename**| Locally saved unique filename |
|**id**| Original ID |
|**fast_similarity**| Fast similarity to the anchor image computed with https://github.com/rom1504/clip-retrieval |
|**caption**| Text caption |
|**image_similarity**| CLIP similarity to the original anchor image |
|**text_similarity**| CLIP similarity to the text "a satellite image" |
|**height**| Height of the image at url |
|**width**| Width of the image at url |
|**lang**| Language predicted using https://huggingface.co/papluca/xlm-roberta-base-language-detection |
|**lang_score**| A measure of confidence in the predicted language |
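As an illustration of the filtering mentioned above, a minimal sketch that keeps confident English captions with a high similarity to "a satellite image"; the local CSV name and the thresholds are illustrative assumptions, not values from the card:
```python
# Illustrative filter over the metadata fields above; the local CSV name and
# the thresholds are assumptions, not values from the card.
import pandas as pd

meta = pd.read_csv("laion_eo_metadata.csv")
subset = meta[
    (meta["lang"] == "en")
    & (meta["lang_score"] > 0.9)        # confident language prediction
    & (meta["text_similarity"] > 0.25)  # close to the text "a satellite image"
]
print(len(subset))
print(subset[["url", "caption"]].head())
```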
### Example Samples

### Data Splits
No official splitting of the dataset is used.
## Dataset Creation
The creation of the prototype version is described in (TBC).
### Curation Rationale
Extraction of samples in LAION-5B relevant to Earth observation tasks.
### Source Data
Samples from the existing LAION-5B dataset (https://laion.ai/blog/laion-5b/).
### Discussion of Biases
Only contains satellite images openly uploaded online, which introduces a heavy bias towards satellite images used for communicating ideas on the internet.
### Citation Information
The workshop paper presented at the DataComp workshop during ICCV 2023 is available at https://arxiv.org/abs/2309.15535.
```latex
@inproceedings{LAION_EO,
title={From LAION-5B to LAION-EO: Filtering Billions of Images Using Anchor Datasets for Satellite Image Extraction},
author={Mikolaj Czerkawski and Alistair Francis},
year={2023},
eprint={2309.15535},
archivePrefix={arXiv},
      primaryClass={cs.CV},
booktitle = {"Towards the Next Generation of Computer Vision Datasets: DataComp Track" Workshop at the IEEE/CVF International Conference on Computer Vision (ICCV)}
}
```
### License
We distribute the metadata dataset (the parquet files) under the Creative Commons CC-BY 4.0 license, which poses no particular restriction. The images remain under their own copyright.
### Contributions
Design and Curation: Mikolaj Czerkawski
|
mikonvergence/LAION-EO
|
[
"task_categories:text-to-image",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"climate",
"arxiv:2309.15535",
"region:us"
] |
2023-09-21T11:09:12+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-to-image"], "tags": ["climate"]}
|
2023-09-28T02:55:45+00:00
|
[
"2309.15535"
] |
[
"en"
] |
TAGS
#task_categories-text-to-image #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #climate #arxiv-2309.15535 #region-us
|
Dataset Card for LAION-EO
=========================
Dataset Description
-------------------
* Point of Contact: Mikolaj Czerkawski, mikolaj.czerkawski@URL
### Dataset Summary
This dataset contains a subset of LAION-5B containing images that are likely to be satellite images. The procedure of acquiring and filtering the dataset has been described in URL
Dataset Structure
-----------------
Each version of the dataset contains a .csv file with metadata, including URLs to the images, which can easily be filtered. Note that the linked images could be copyrighted.
### Data Fields
### Example Samples

### Data Splits
No official splitting of the dataset is used.
Dataset Creation
----------------
The creation of the prototype version is described in (TBC).
### Curation Rationale
Extraction of samples in LAION-5B relevant to Earth observation tasks.
### Source Data
Samples from the existing LAION-5B dataset (URL
### Discussion of Biases
Only contains satellite images openly uploaded online, which introduces a heavy bias towards satellite images used for communicating ideas on the internet.
The workshop paper presented at the DataComp workshop during ICCV 2023 is available at URL
### License
We distribute the metadata dataset (the parquet files) under the Creative Commons CC-BY 4.0 license, which poses no particular restriction. The images remain under their own copyright.
### Contributions
Design and Curation: Mikolaj Czerkawski
|
[
"### Dataset Summary\n\n\nThis dataset contains a subset of LAION-5B containing images that are likely to be satellite images. The procedure of acquiring and filtering the dataset has been described in URL\n\n\nDataset Structure\n-----------------\n\n\nEach version of the dataset contains a .csv file with metadata with urls to images, which can be easily filtered. Note that the linked images could be copyrighted.",
"### Data Fields",
"### Example Samples\n\n\n",
"### Data Splits\n\n\nNo official splitting of the dataset is used.\n\n\nDataset Creation\n----------------\n\n\nThe creation of the prototype version is described in (TBC).",
"### Curation Rationale\n\n\nExtraction of samples in LAION-5B relevant to Earth observation tasks.",
"### Source Data\n\n\nSamples from the existing LAION-5B dataset (URL",
"### Discussion of Biases\n\n\nOnly contains satellite images openly uploaded online, which introduces a heavy bias towards satellite images used for communicating ideas on the internet.\n\n\nThe workshop paper presented at the DataComp workshop during ICCV 2023 is available at URL",
"### License\n\n\nWe distribute the metadata dataset (the parquet files) under the Creative Common CC-BY 4.0 license, which poses no particular restriction. The images are under their copyright.",
"### Contributions\n\n\nDesign and Curation: Mikolaj Czerkawski"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #climate #arxiv-2309.15535 #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains a subset of LAION-5B containing images that are likely to be satellite images. The procedure of acquiring and filtering the dataset has been described in URL\n\n\nDataset Structure\n-----------------\n\n\nEach version of the dataset contains a .csv file with metadata with urls to images, which can be easily filtered. Note that the linked images could be copyrighted.",
"### Data Fields",
"### Example Samples\n\n\n",
"### Data Splits\n\n\nNo official splitting of the dataset is used.\n\n\nDataset Creation\n----------------\n\n\nThe creation of the prototype version is described in (TBC).",
"### Curation Rationale\n\n\nExtraction of samples in LAION-5B relevant to Earth observation tasks.",
"### Source Data\n\n\nSamples from the existing LAION-5B dataset (URL",
"### Discussion of Biases\n\n\nOnly contains satellite images openly uploaded online, which introduces a heavy bias towards satellite images used for communicating ideas on the internet.\n\n\nThe workshop paper presented at the DataComp workshop during ICCV 2023 is available at URL",
"### License\n\n\nWe distribute the metadata dataset (the parquet files) under the Creative Common CC-BY 4.0 license, which poses no particular restriction. The images are under their copyright.",
"### Contributions\n\n\nDesign and Curation: Mikolaj Czerkawski"
] |
[
55,
97,
5,
17,
36,
24,
17,
59,
40,
16
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #climate #arxiv-2309.15535 #region-us \n### Dataset Summary\n\n\nThis dataset contains a subset of LAION-5B containing images that are likely to be satellite images. The procedure of acquiring and filtering the dataset has been described in URL\n\n\nDataset Structure\n-----------------\n\n\nEach version of the dataset contains a .csv file with metadata with urls to images, which can be easily filtered. Note that the linked images could be copyrighted.### Data Fields### Example Samples\n\n\n### Data Splits\n\n\nNo official splitting of the dataset is used.\n\n\nDataset Creation\n----------------\n\n\nThe creation of the prototype version is described in (TBC).### Curation Rationale\n\n\nExtraction of samples in LAION-5B relevant to Earth observation tasks.### Source Data\n\n\nSamples from the existing LAION-5B dataset (URL### Discussion of Biases\n\n\nOnly contains satellite images openly uploaded online, which introduces a heavy bias towards satellite images used for communicating ideas on the internet.\n\n\nThe workshop paper presented at the DataComp workshop during ICCV 2023 is available at URL### License\n\n\nWe distribute the metadata dataset (the parquet files) under the Creative Common CC-BY 4.0 license, which poses no particular restriction. The images are under their copyright.### Contributions\n\n\nDesign and Curation: Mikolaj Czerkawski"
] |
bc6f1697396ec1596e1cb460594175daea5ea18b
|
# Dataset Card for "next-dataset-refined-batch-7000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-7000
|
[
"region:us"
] |
2023-09-21T11:18:08+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 320953791.0, "num_examples": 1000}], "download_size": 294115368, "dataset_size": 320953791.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T11:20:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-7000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-7000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-7000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-7000\"\n\nMore Information needed"
] |
431683876006e445ffaa05f3fcaa46beb6fa8868
|
# Dataset Card for "bus_few4_128x"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_128x
|
[
"region:us"
] |
2023-09-21T11:32:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1752765, "num_examples": 8960}, {"name": "validation", "num_bytes": 6900, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 1830283}}
|
2023-09-23T15:57:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_128x"
More Information needed
|
[
"# Dataset Card for \"bus_few4_128x\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_128x\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_128x\"\n\nMore Information needed"
] |
92d1b244fdde7f09bba18f0a525a6cad5e835434
|
# Dataset Card for "bus_few4_128x_empty"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_128x_empty
|
[
"region:us"
] |
2023-09-21T11:32:23+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1560019, "num_examples": 8960}, {"name": "validation", "num_bytes": 6128, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 1636765}}
|
2023-09-23T15:57:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_128x_empty"
More Information needed
|
[
"# Dataset Card for \"bus_few4_128x_empty\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_128x_empty\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_128x_empty\"\n\nMore Information needed"
] |
cf850770ebc1454ef5fbe62780c92ab4d8442f87
|
# Dataset Card for "bus_few4_128x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_128x_pvi
|
[
"region:us"
] |
2023-09-21T11:35:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 502551, "num_examples": 3628}, {"name": "validation", "num_bytes": 6900, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 580069}}
|
2023-12-28T09:07:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_128x_pvi"
More Information needed
|
[
"# Dataset Card for \"bus_few4_128x_pvi\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_128x_pvi\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_128x_pvi\"\n\nMore Information needed"
] |
a7633ea66a442cdc3aece23cff55e81b68800a23
|
# Dataset Card for "next-dataset-refined-batch-8000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-8000
|
[
"region:us"
] |
2023-09-21T11:39:41+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 309516962.0, "num_examples": 1000}], "download_size": 282053182, "dataset_size": 309516962.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T11:41:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-8000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-8000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-8000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-8000\"\n\nMore Information needed"
] |
16accd0f457e2791cc41b625e7718340bcffd969
|
# English French Webpages Scraped Translated
### Dataset Summary
French/English parallel texts for training translation models: over 17.1 million sentences in French and English. The dataset was created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs into English URLs, assuming that such document pairs are translations of each other. This is the main dataset of the Workshop on Statistical Machine Translation (WMT) 2015 and can be used for machine translation and language modelling. Refer to the paper here: http://www.statmt.org/wmt15/pdf/WMT01.pdf
### Post-process
This dataset has been post-processed to remove all duplicates, empty fields, and phrases containing fewer than 5 words.
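A minimal sketch of those three filters, shown on a small sample of the published `en`/`fr` pairs (the released data already has them applied, so this is purely illustrative):
```python
# Illustrative re-run of the three filters on a small sample of the published
# "en"/"fr" pairs (the released data already has them applied).
from datasets import load_dataset

sample = load_dataset(
    "Nicolas-BZRD/English_French_Webpages_Scraped_Translated",
    split="train[:1000]",
)
df = sample.to_pandas()

df = df.drop_duplicates()                                             # remove duplicates
df = df[(df["en"].str.strip() != "") & (df["fr"].str.strip() != "")]  # drop empty fields
df = df[(df["en"].str.split().str.len() >= 5)
        & (df["fr"].str.split().str.len() >= 5)]                      # keep phrases with at least 5 words
print(len(df))
```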
### Original Dataset Citation
```
@InProceedings{bojar-EtAl:2015:WMT,
author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
title = {Findings of the 2015 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
month = {September},
year = {2015},
address = {Lisbon, Portugal},
publisher = {Association for Computational Linguistics},
pages = {1--46},
url = {http://aclweb.org/anthology/W15-3001}
}
```
|
Nicolas-BZRD/English_French_Webpages_Scraped_Translated
|
[
"task_categories:translation",
"size_categories:10M<n<100M",
"language:en",
"language:fr",
"license:odbl",
"webpages",
"parallel",
"parallel data",
"region:us"
] |
2023-09-21T11:54:23+00:00
|
{"language": ["en", "fr"], "license": "odbl", "size_categories": ["10M<n<100M"], "task_categories": ["translation"], "tags": ["webpages", "parallel", "parallel data"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "fr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6811772380, "num_examples": 17161263}], "download_size": 640497280, "dataset_size": 6811772380}}
|
2023-09-21T13:29:04+00:00
|
[] |
[
"en",
"fr"
] |
TAGS
#task_categories-translation #size_categories-10M<n<100M #language-English #language-French #license-odbl #webpages #parallel #parallel data #region-us
|
# English French Webpages Scraped Translated
### Dataset Summary
French/English parallel texts for training translation models. Over 17.1 million sentences in French and English. Dataset created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs onto English URLs, and assumed that these documents are translations of each other. This is the main dataset of the Workshop on Statistical Machine Translation (WMT) 2015 and can be used for Machine Translation and Language Models. Refer to the paper here: URL
### Post-process
This dataset has been post-processed to remove all duplicates, empty fields and phrases containing less than 5 words.
### Original Dataset Citation
|
[
"# English French Webpages Scraped Translated",
"### Dataset Summary\nFrench/English parallel texts for training translation models. Over 17.1 million sentences in French and English. Dataset created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs onto English URLs, and assumed that these documents are translations of each other. This is the main dataset of Workshop on Statistical Machine Translation (WML) 2015 Dataset that can be used for Machine Translation and Language Models. Refer to the paper here: URL",
"### Post-process\nThis dataset has been post-processed to remove all duplicates, empty fields and phrases containing less than 5 words.",
"### Original Dataset Citation"
] |
[
"TAGS\n#task_categories-translation #size_categories-10M<n<100M #language-English #language-French #license-odbl #webpages #parallel #parallel data #region-us \n",
"# English French Webpages Scraped Translated",
"### Dataset Summary\nFrench/English parallel texts for training translation models. Over 17.1 million sentences in French and English. Dataset created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs onto English URLs, and assumed that these documents are translations of each other. This is the main dataset of Workshop on Statistical Machine Translation (WML) 2015 Dataset that can be used for Machine Translation and Language Models. Refer to the paper here: URL",
"### Post-process\nThis dataset has been post-processed to remove all duplicates, empty fields and phrases containing less than 5 words.",
"### Original Dataset Citation"
] |
[
56,
11,
119,
33,
7
] |
[
"passage: TAGS\n#task_categories-translation #size_categories-10M<n<100M #language-English #language-French #license-odbl #webpages #parallel #parallel data #region-us \n# English French Webpages Scraped Translated### Dataset Summary\nFrench/English parallel texts for training translation models. Over 17.1 million sentences in French and English. Dataset created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs onto English URLs, and assumed that these documents are translations of each other. This is the main dataset of Workshop on Statistical Machine Translation (WML) 2015 Dataset that can be used for Machine Translation and Language Models. Refer to the paper here: URL### Post-process\nThis dataset has been post-processed to remove all duplicates, empty fields and phrases containing less than 5 words.### Original Dataset Citation"
] |
28d04641bfc34cb312c3fcb99c06836adfe5ec1f
|
# Dataset Card for "grundfunktionen-50-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/grundfunktionen-50-undersampled
|
[
"region:us"
] |
2023-09-21T12:00:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AndereAnlagen", "1": "Befoerdern", "2": "KaelteVersorgen", "3": "LuftVersorgen", "4": "MedienVersorgen", "5": "Sichern", "6": "StromVersorgen", "7": "WaermeVersorgen"}}}}], "splits": [{"name": "train", "num_bytes": 63763.936884264804, "num_examples": 362}, {"name": "test", "num_bytes": 952887, "num_examples": 5431}, {"name": "valid", "num_bytes": 952887, "num_examples": 5431}], "download_size": 870993, "dataset_size": 1969537.936884265}}
|
2023-09-21T12:00:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "grundfunktionen-50-undersampled"
More Information needed
|
[
"# Dataset Card for \"grundfunktionen-50-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"grundfunktionen-50-undersampled\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"grundfunktionen-50-undersampled\"\n\nMore Information needed"
] |
67bb7e91f15b70a5d34abe4290f600b6b0eb4843
|
# Dataset Card for "grundfunktionen-100-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/grundfunktionen-100-undersampled
|
[
"region:us"
] |
2023-09-21T12:00:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AndereAnlagen", "1": "Befoerdern", "2": "KaelteVersorgen", "3": "LuftVersorgen", "4": "MedienVersorgen", "5": "Sichern", "6": "StromVersorgen", "7": "WaermeVersorgen"}}}}], "splits": [{"name": "train", "num_bytes": 125414.15210385784, "num_examples": 712}, {"name": "test", "num_bytes": 952887, "num_examples": 5431}, {"name": "valid", "num_bytes": 952887, "num_examples": 5431}], "download_size": 893763, "dataset_size": 2031188.1521038578}}
|
2023-09-21T12:00:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "grundfunktionen-100-undersampled"
More Information needed
|
[
"# Dataset Card for \"grundfunktionen-100-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"grundfunktionen-100-undersampled\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"grundfunktionen-100-undersampled\"\n\nMore Information needed"
] |
df8c39e74326d76c8ed29fe765864907cf9eb3f2
|
# Dataset Card for "grundfunktionen-200-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/grundfunktionen-200-undersampled
|
[
"region:us"
] |
2023-09-21T12:00:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AndereAnlagen", "1": "Befoerdern", "2": "KaelteVersorgen", "3": "LuftVersorgen", "4": "MedienVersorgen", "5": "Sichern", "6": "StromVersorgen", "7": "WaermeVersorgen"}}}}], "splits": [{"name": "train", "num_bytes": 248714.5825430439, "num_examples": 1412}, {"name": "test", "num_bytes": 952887, "num_examples": 5431}, {"name": "valid", "num_bytes": 952887, "num_examples": 5431}], "download_size": 945049, "dataset_size": 2154488.582543044}}
|
2023-09-21T12:00:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "grundfunktionen-200-undersampled"
More Information needed
|
[
"# Dataset Card for \"grundfunktionen-200-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"grundfunktionen-200-undersampled\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"grundfunktionen-200-undersampled\"\n\nMore Information needed"
] |
559a3180f19fe9ab8da8245d485278ea589f7140
|
# Dataset Card for "soict_train_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/soict_train_dataset
|
[
"region:us"
] |
2023-09-21T12:04:13+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "sentence_annotation", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "file", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "origin_transcription", "dtype": "string"}, {"name": "sentence_norm", "dtype": "string"}, {"name": "sentence_norm_v2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3484626224, "num_examples": 6729}, {"name": "test", "num_bytes": 390303091, "num_examples": 748}], "download_size": 918877822, "dataset_size": 3874929315}}
|
2023-09-21T14:05:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "soict_train_dataset"
More Information needed
|
[
"# Dataset Card for \"soict_train_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"soict_train_dataset\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"soict_train_dataset\"\n\nMore Information needed"
] |
694716f554f860dc4c42e7924b8d61f695e36ba7
|
# Dataset Card for "next-dataset-refined-batch-9000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-9000
|
[
"region:us"
] |
2023-09-21T12:06:28+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 335325495.0, "num_examples": 1000}], "download_size": 309863965, "dataset_size": 335325495.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T12:08:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-9000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-9000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-9000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-9000\"\n\nMore Information needed"
] |
440ab7bd7109d6b52ce2628ad1ffe034c5e09c33
|
# Dataset Card for "wikitext_de_document_level_v01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jphme/wikitext_de_document_level_v01
|
[
"region:us"
] |
2023-09-21T12:08:48+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1860002, "num_examples": 200}], "download_size": 1138143, "dataset_size": 1860002}}
|
2023-09-21T12:08:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wikitext_de_document_level_v01"
More Information needed
|
[
"# Dataset Card for \"wikitext_de_document_level_v01\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wikitext_de_document_level_v01\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wikitext_de_document_level_v01\"\n\nMore Information needed"
] |
5d47aa6c77526151528b6b7edc796e95c429f707
|
# Dogs Video Object Tracking Dataset
The dataset contains frames extracted from videos with dogs on the streets. Each frame is accompanied by a **bounding box** that specifically **tracks the dog** in the image.
The dataset provides a valuable resource for advancing computer vision tasks, enabling the development of more accurate and effective solutions for monitoring and understanding dog behavior in urban settings.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=dogs-video-object-tracking-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
The dataset consists of 3 folders with frames from the video with dogs on the streets.
Each folder includes:
- **images**: folder with original frames from the video,
- **boxes**: visualized data labeling for the images in the previous folder,
- **.csv file**: file with id and path of each frame in the "images" folder,
- **annotations.xml**: contains coordinates of the bounding boxes, created for the original frames
# Data Format
Each frame from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes for dog tracking. For each point, the x and y coordinates are provided.
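Below is a minimal parsing sketch using Python's standard `xml.etree.ElementTree`. The folder name and the `track`/`box` element layout (with `xtl`, `ytl`, `xbr`, `ybr` attributes) are assumptions based on the usual CVAT export format and should be checked against the actual `annotations.xml`:
```python
import xml.etree.ElementTree as ET

# The path is illustrative - point it at one of the three video folders
tree = ET.parse("video_01/annotations.xml")
root = tree.getroot()

boxes = []
for track in root.iter("track"):          # assumed: one <track> per tracked dog
    label = track.get("label")
    for box in track.findall("box"):      # assumed: one <box> per annotated frame
        boxes.append({
            "frame": int(box.get("frame")),
            "label": label,
            "xtl": float(box.get("xtl")),  # top-left x
            "ytl": float(box.get("ytl")),  # top-left y
            "xbr": float(box.get("xbr")),  # bottom-right x
            "ybr": float(box.get("ybr")),  # bottom-right y
        })

print(f"Parsed {len(boxes)} bounding boxes")
```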
# Example of the XML-file
.png?generation=1695994709378514&alt=media)
# Object tracking can be performed in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=dogs-video-object-tracking-dataset)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/dogs-video-object-tracking-dataset
|
[
"task_categories:image-to-image",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"biology",
"region:us"
] |
2023-09-21T12:27:45+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-to-image", "object-detection"], "tags": ["code", "biology"], "dataset_info": [{"config_name": "video_01", "features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "shapes", "sequence": [{"name": "track_id", "dtype": "uint32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dog"}}}}, {"name": "type", "dtype": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "float32"}, {"name": "occluded", "dtype": "uint8"}, {"name": "attributes", "sequence": [{"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 14990, "num_examples": 52}], "download_size": 313328015, "dataset_size": 14990}, {"config_name": "video_02", "features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "shapes", "sequence": [{"name": "track_id", "dtype": "uint32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dog"}}}}, {"name": "type", "dtype": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "float32"}, {"name": "occluded", "dtype": "uint8"}, {"name": "attributes", "sequence": [{"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 19600, "num_examples": 58}], "download_size": 67354761, "dataset_size": 19600}, {"config_name": "video_03", "features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "shapes", "sequence": [{"name": "track_id", "dtype": "uint32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dog"}}}}, {"name": "type", "dtype": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "float32"}, {"name": "occluded", "dtype": "uint8"}, {"name": "attributes", "sequence": [{"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 14126, "num_examples": 49}], "download_size": 148412090, "dataset_size": 14126}]}
|
2023-10-09T08:43:57+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-to-image #task_categories-object-detection #language-English #license-cc-by-nc-nd-4.0 #code #biology #region-us
|
# Dogs Video Object Tracking Dataset
The dataset contains frames extracted from videos with dogs on the streets. Each frame is accompanied by a bounding box that specifically tracks the dog in the image.
The dataset provides a valuable resource for advancing computer vision tasks, enabling the development of more accurate and effective solutions for monitoring and understanding dog behavior in urban settings.

|
swaroopajit/next-dataset-refined-batch-10000
|
[
"region:us"
] |
2023-09-21T12:35:47+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 326024308.0, "num_examples": 1000}], "download_size": 299977034, "dataset_size": 326024308.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T12:37:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-10000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-10000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-10000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-10000\"\n\nMore Information needed"
] |
7ec365e48fb22bbc2ff5110c38a10b178e429b83
|
# Dataset Card for "brain-tumor-object-detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmenendezg/brain-tumor-object-detection
|
[
"region:us"
] |
2023-09-21T12:45:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "area", "sequence": "int64"}, {"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "id", "sequence": "int64"}, {"name": "iscrowd", "sequence": "int64"}, {"name": "label", "sequence": "float64"}]}], "splits": [{"name": "train", "num_bytes": 21560470.835990887, "num_examples": 614}, {"name": "validation", "num_bytes": 9270300.164009111, "num_examples": 264}, {"name": "test", "num_bytes": 7552385.0, "num_examples": 223}], "download_size": 30702966, "dataset_size": 38383156.0}}
|
2023-10-03T21:48:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "brain-tumor-object-detection"
More Information needed
|
[
"# Dataset Card for \"brain-tumor-object-detection\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"brain-tumor-object-detection\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"brain-tumor-object-detection\"\n\nMore Information needed"
] |
aab5ec740763d45c79404cb727098f6acea59bd7
|
# Dataset Card for "next-dataset-refined-batch-11000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-11000
|
[
"region:us"
] |
2023-09-21T13:08:59+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 315028784.0, "num_examples": 1000}], "download_size": 287078371, "dataset_size": 315028784.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T13:10:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-11000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-11000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-11000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-11000\"\n\nMore Information needed"
] |
053f2d382fb0498eadc235648c2543697c10dfe1
|
This is a sample of GitHub commits and files reconstructed from the Software Heritage dataset.
It contains the latest 1024 commits in all of `pytorch/*` and `huggingface/*` repos.
The tables are split to avoid an explosion of rows (lots of repeated files between commits), so you will need to pre-filter the commits before adding the file contents.
Table descriptions:
## 1. `commits`
The commit message table. Join it with `commit_filepath` on `commits.directory_id == commit_filepath.directory_id` and `commits.parent_directory_id == commit_filepath.directory_id`.
* origin - str (repo url, example: `https://github.com/huggingface/datasets`)
* full_name - str (repo name, example: `huggingface/datasets`)
* commit_id - str (example: `56b114ebfd5399252dc23f9df207f87c5397b50a`)
* parent_commit_id - str (previous commit id, example: `8c826fb80f7f8135f6e632d34c8f59134f5983c8`)
* snapshot_id - str (Software Heritage snapshot id, example: `d76232879a5912b1eaca91e8889863117bca66a4`)
* visit_date - datetime[ns] (Software Heritage crawler visit date, example: `2022-11-28 13:03:09.100114`)
* branch_name - str (example: `refs/heads/main`)
* revision_date - datetime[ns] (commit date, example: `2022-04-27 17:30:41`)
* committer_date - datetime[ns] (commit date, example: `2022-04-27 17:30:41`)
* author - binary (Software Heritage anonymized commit's author name)
* message - str (commit message, example: `update auth when mirroring datasets on the hub (#4242)`)
* directory_id - str (root directory id to join this table with the files, example: `84c6cc5b2c156ed3251674c43dd411d731183bb3`)
* parent_directory_id - str (parent commit's root directory id, example: `e927ce1cdecf6286f7e23204ed656373c9921f89`)
## 2. `commit_filepath`
The file paths associated with each commit. Join it with `file_contents` on `blob_id`.
* directory_id - str (root directory id of the corresponding commit, example: `001331910958befd665d94c85c23471a8fc1ab19`)
* blob_id - str (Software Heritage file blob id, example: `47953673b7b51c2585402a91d434f5fe4d9dc105`)
* content_id - str (Software Heritage content id, example: `d6b3dab547a59efe5246edf06a42e8e85776acb1`)
* path - str (file path inside the repository, example: ` /core/src/components/tab-bar/usage/javascript.md`)
* length - i64 (file length in bytes, example: `529`)
## 3. `file_contents`
The contents of all non-binary files (i.e. not images/media/data).
* blob_id - str (Software Heritage file blob id, example: `47953673b7b51c2585402a91d434f5fe4d9dc105`)
* content - str (file contents, always a UTF-8 string)
* src_encoding - str (original file's encoding, example: `UTF-8`)
* language - str (programming language label, example: `Python`)
* is_vendor - bool (True if it's a vendor file, e.g. 3rd party library)
* is_generated - bool (True if the file is auto-generated)
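As a rough illustration, the joins described above could look like the sketch below. It uses pandas, and the parquet file names are placeholders rather than part of the release — adapt both to however you actually load the three tables:
```python
import pandas as pd

# File names are placeholders; load the three tables however they are stored locally
commits = pd.read_parquet("commits.parquet")
commit_filepath = pd.read_parquet("commit_filepath.parquet")
file_contents = pd.read_parquet("file_contents.parquet")

# Pre-filter the commits first to avoid the row explosion mentioned above
repo_commits = commits[commits["full_name"] == "huggingface/datasets"]

# Files as they exist after each commit: join on the commit's root directory id
files = repo_commits.merge(commit_filepath, on="directory_id", how="inner")

# Attach the contents of the (non-binary) files via the blob id
files = files.merge(file_contents, on="blob_id", how="inner")

print(files[["commit_id", "path", "length", "language"]].head())
```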
|
bigcode/commits_sample_files
|
[
"region:us"
] |
2023-09-21T13:10:03+00:00
|
{}
|
2023-09-22T12:22:11+00:00
|
[] |
[] |
TAGS
#region-us
|
This is a sample of GitHub commits and files reconstructed from the Software Heritage dataset.
It contains the latest 1024 commits in all of 'pytorch/*' and 'huggingface/*' repos.
The tables are split to avoid an explosion of rows (lots of repeated files between commits), so you will need to pre-filter the commits before adding the file contents.
Table descriptions:
## 1. 'commits'
The commit message table. Join it with 'commit_filepath' on 'commits.directory_id == commit_filepath.directory_id' and 'commits.parent_directory_id == commit_filepath.directory_id'.
* origin - str (repo url, example: 'URL
* full_name - str (repo name, example: 'huggingface/datasets')
* commit_id - str (example: '56b114ebfd5399252dc23f9df207f87c5397b50a')
* parent_commit_id - str (previous commit id, example: '8c826fb80f7f8135f6e632d34c8f59134f5983c8')
* snapshot_id - str (Software Heritage snapshot id, example: 'd76232879a5912b1eaca91e8889863117bca66a4')
* visit_date - datetime[ns] (Software Heritage crawler visit date, example: '2022-11-28 13:03:09.100114')
* branch_name - str (example: 'refs/heads/main')
* revision_date - datetime[ns] (commit date, example: '2022-04-27 17:30:41')
* committer_date - datetime[ns] (commit date, example: '2022-04-27 17:30:41')
* author - binary (Software Heritage anonymized commit's author name)
* message - str (commit message, example: 'update auth when mirroring datasets on the hub (#4242)')
* directory_id - str (root directory id to join this table with the files, example: '84c6cc5b2c156ed3251674c43dd411d731183bb3')
* parent_directory_id - str (parent commit's root directory id, example: 'e927ce1cdecf6286f7e23204ed656373c9921f89')
## 2. 'commit_filepath'
The file paths associated with each commit. Join it with 'file_contents' on 'blob_id'.
* directory_id - str (root directory id of the corresponding commit, example: '001331910958befd665d94c85c23471a8fc1ab19')
* blob_id - str (Software Heritage file blob id, example: '47953673b7b51c2585402a91d434f5fe4d9dc105')
* content_id - str (Software Heritage content id, example: 'd6b3dab547a59efe5246edf06a42e8e85776acb1')
* path - str (file path inside the repository, example: ' /core/src/components/tab-bar/usage/URL')
* length - i64 (file length in bytes, example: '529')
## 3. 'file_contents'
The contents of all non-binary files (i.e. not images/media/data).
* blob_id - str (Software Heritage file blob id, example: '47953673b7b51c2585402a91d434f5fe4d9dc105')
* content - str (file contents, always a UTF-8 string)
* src_encoding - str (original file's encoding, example: 'UTF-8')
* language - str (programming language label, example: 'Python')
* is_vendor - bool (True if it's a vendor file, e.g. 3rd party library)
* is_generated - bool (True if the file is auto-generated)
|
[
"## 1. 'commits'\nThe commit message table. Join it with 'commit_filepath' on 'commits.directory_id == commit_filepath.directory_id' and 'commits.parent_directory_id == commit_filepath.directory_id'.\n\n* origin - str\t(repo url, example: 'URL\n* full_name - str\t(repo name, example: 'huggingface/datasets')\n* commit_id - str\t(example: '56b114ebfd5399252dc23f9df207f87c5397b50a')\n* parent_commit_id - str\t(previous commit id, example: '8c826fb80f7f8135f6e632d34c8f59134f5983c8')\n* snapshot_id - str\t(Software Heritage snapshot id, example: 'd76232879a5912b1eaca91e8889863117bca66a4')\n* visit_date - datetime[ns]\t(Software Heritage crawler visit date, example: '2022-11-28 13:03:09.100114')\n* branch_name - str\t(example: 'refs/heads/main')\n* revision_date - datetime[ns] (commit date, example: '2022-04-27 17:30:41')\t\n* committer_date - datetime[ns]\t(commit date, example: '2022-04-27 17:30:41')\t\t\n* author - binary\t(Software Heritage anonymized commit's author name)\n* message - str\t(commit message, example: 'update auth when mirroring datasets on the hub (#4242)')\n* directory_id - str (root directory id to join this table with the files, example: '84c6cc5b2c156ed3251674c43dd411d731183bb3')\n* parent_directory_id - str\t(parent commit's root directory id, example: 'e927ce1cdecf6286f7e23204ed656373c9921f89')",
"## 2. 'commit_filepath'\nThe file paths associated with each commit. Join it with 'file_conents' on 'blob_id'\n\n* directory_id - str\t(root directory id of the corresponding commit, example: '001331910958befd665d94c85c23471a8fc1ab19')\n* blob_id - str\t(Software Heritage file blob id, example: '47953673b7b51c2585402a91d434f5fe4d9dc105')\n* content_id - str\t(Software Heritage content id, example: 'd6b3dab547a59efe5246edf06a42e8e85776acb1')\n* path - str\t(file path inside the repository, example: '\t/core/src/components/tab-bar/usage/URL')\n* length - i64 (file length in bytes, example: '529')",
"## 3. 'file_contents'\nThe contents of all non-binary files (i.e. not images/media/data).\n\n* blob_id - str\t(Software Heritage file blob id, example: '47953673b7b51c2585402a91d434f5fe4d9dc105')\n* content - str\t(file contents, always a UTF-8 string)\n* src_encoding - str\t(original file's encoding, example: 'UTF-8')\nlanguage - str\t(programming language label, example: 'Python')\nis_vendor - bool\t(True if it's a vendor file, e.g. 3rd party library)\nis_generated - bool (True if the file is auto-generated)"
] |
[
"TAGS\n#region-us \n",
"## 1. 'commits'\nThe commit message table. Join it with 'commit_filepath' on 'commits.directory_id == commit_filepath.directory_id' and 'commits.parent_directory_id == commit_filepath.directory_id'.\n\n* origin - str\t(repo url, example: 'URL\n* full_name - str\t(repo name, example: 'huggingface/datasets')\n* commit_id - str\t(example: '56b114ebfd5399252dc23f9df207f87c5397b50a')\n* parent_commit_id - str\t(previous commit id, example: '8c826fb80f7f8135f6e632d34c8f59134f5983c8')\n* snapshot_id - str\t(Software Heritage snapshot id, example: 'd76232879a5912b1eaca91e8889863117bca66a4')\n* visit_date - datetime[ns]\t(Software Heritage crawler visit date, example: '2022-11-28 13:03:09.100114')\n* branch_name - str\t(example: 'refs/heads/main')\n* revision_date - datetime[ns] (commit date, example: '2022-04-27 17:30:41')\t\n* committer_date - datetime[ns]\t(commit date, example: '2022-04-27 17:30:41')\t\t\n* author - binary\t(Software Heritage anonymized commit's author name)\n* message - str\t(commit message, example: 'update auth when mirroring datasets on the hub (#4242)')\n* directory_id - str (root directory id to join this table with the files, example: '84c6cc5b2c156ed3251674c43dd411d731183bb3')\n* parent_directory_id - str\t(parent commit's root directory id, example: 'e927ce1cdecf6286f7e23204ed656373c9921f89')",
"## 2. 'commit_filepath'\nThe file paths associated with each commit. Join it with 'file_conents' on 'blob_id'\n\n* directory_id - str\t(root directory id of the corresponding commit, example: '001331910958befd665d94c85c23471a8fc1ab19')\n* blob_id - str\t(Software Heritage file blob id, example: '47953673b7b51c2585402a91d434f5fe4d9dc105')\n* content_id - str\t(Software Heritage content id, example: 'd6b3dab547a59efe5246edf06a42e8e85776acb1')\n* path - str\t(file path inside the repository, example: '\t/core/src/components/tab-bar/usage/URL')\n* length - i64 (file length in bytes, example: '529')",
"## 3. 'file_contents'\nThe contents of all non-binary files (i.e. not images/media/data).\n\n* blob_id - str\t(Software Heritage file blob id, example: '47953673b7b51c2585402a91d434f5fe4d9dc105')\n* content - str\t(file contents, always a UTF-8 string)\n* src_encoding - str\t(original file's encoding, example: 'UTF-8')\nlanguage - str\t(programming language label, example: 'Python')\nis_vendor - bool\t(True if it's a vendor file, e.g. 3rd party library)\nis_generated - bool (True if the file is auto-generated)"
] |
[
6,
494,
221,
181
] |
[
"passage: TAGS\n#region-us \n## 1. 'commits'\nThe commit message table. Join it with 'commit_filepath' on 'commits.directory_id == commit_filepath.directory_id' and 'commits.parent_directory_id == commit_filepath.directory_id'.\n\n* origin - str\t(repo url, example: 'URL\n* full_name - str\t(repo name, example: 'huggingface/datasets')\n* commit_id - str\t(example: '56b114ebfd5399252dc23f9df207f87c5397b50a')\n* parent_commit_id - str\t(previous commit id, example: '8c826fb80f7f8135f6e632d34c8f59134f5983c8')\n* snapshot_id - str\t(Software Heritage snapshot id, example: 'd76232879a5912b1eaca91e8889863117bca66a4')\n* visit_date - datetime[ns]\t(Software Heritage crawler visit date, example: '2022-11-28 13:03:09.100114')\n* branch_name - str\t(example: 'refs/heads/main')\n* revision_date - datetime[ns] (commit date, example: '2022-04-27 17:30:41')\t\n* committer_date - datetime[ns]\t(commit date, example: '2022-04-27 17:30:41')\t\t\n* author - binary\t(Software Heritage anonymized commit's author name)\n* message - str\t(commit message, example: 'update auth when mirroring datasets on the hub (#4242)')\n* directory_id - str (root directory id to join this table with the files, example: '84c6cc5b2c156ed3251674c43dd411d731183bb3')\n* parent_directory_id - str\t(parent commit's root directory id, example: 'e927ce1cdecf6286f7e23204ed656373c9921f89')"
] |
540a26c2f2bb80cfd65d8fe6b687f00be44fc179
|
# Dataset Card for "ual-chatbot-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aprlc/ual-chatbot-train
|
[
"region:us"
] |
2023-09-21T13:10:05+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121685, "num_examples": 142}], "download_size": 45906, "dataset_size": 121685}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T13:10:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ual-chatbot-train"
More Information needed
|
[
"# Dataset Card for \"ual-chatbot-train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ual-chatbot-train\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ual-chatbot-train\"\n\nMore Information needed"
] |
bd192fb8d185620a265995153b54f5f946395d1f
|
# Dataset Card for "next-dataset-refined-batch-12000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-12000
|
[
"region:us"
] |
2023-09-21T13:39:27+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 337818757.0, "num_examples": 1000}], "download_size": 312355831, "dataset_size": 337818757.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T13:41:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-12000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-12000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-12000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-12000\"\n\nMore Information needed"
] |
57fe2bb3e22b2fe2e9efc6053e608be549be5aef
|
# Dataset Card for "practice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AfshanAhmed/practice
|
[
"region:us"
] |
2023-09-21T13:39:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 107738558.0, "num_examples": 105}], "download_size": 107745050, "dataset_size": 107738558.0}}
|
2023-09-28T03:47:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "practice"
More Information needed
|
[
"# Dataset Card for \"practice\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"practice\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"practice\"\n\nMore Information needed"
] |
db87af0591d930f0d2c8f60c79f9a91ddd7a54b9
|
# Dataset Card for "next-dataset-refined-batch-13000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
swaroopajit/next-dataset-refined-batch-13000
|
[
"region:us"
] |
2023-09-21T13:42:59+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 53300861.0, "num_examples": 153}], "download_size": 48908543, "dataset_size": 53300861.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T13:43:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "next-dataset-refined-batch-13000"
More Information needed
|
[
"# Dataset Card for \"next-dataset-refined-batch-13000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"next-dataset-refined-batch-13000\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"next-dataset-refined-batch-13000\"\n\nMore Information needed"
] |
136f5bf249985d0a8a9adb381c1313340128b93d
|
# Note
> some reward-model (RM) data from public datasets
- format
```json
{
"history": [
["query1", "answer1"],
["query2", "answer2"]
],
"prompt": "query",
"input": "input for query",
"output": [
"output rank1",
"output rank2",
"output rank3"
]
}
```
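A small illustrative sketch (not part of the dataset) of how one record in this format could be flattened into pairwise preference examples; it assumes the `output` list is ordered best-first (rank1, rank2, ...):
```python
def to_preference_pairs(record):
    """Flatten one ranked record into (chosen, rejected) pairs.

    Assumes record["output"] is ordered best-first (rank1, rank2, ...).
    """
    prompt = record["prompt"]
    if record.get("input"):
        prompt = prompt + "\n" + record["input"]
    outputs = record["output"]
    return [
        {
            "history": record.get("history", []),
            "prompt": prompt,
            "chosen": better,
            "rejected": worse,
        }
        for better, worse in zip(outputs, outputs[1:])
    ]

example = {
    "history": [["query1", "answer1"], ["query2", "answer2"]],
    "prompt": "query",
    "input": "input for query",
    "output": ["output rank1", "output rank2", "output rank3"],
}
print(to_preference_pairs(example))
```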
Thanks
- [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) :
- [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn)
- [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k)
|
ticoAg/rlhf_zh
|
[
"region:us"
] |
2023-09-21T13:51:03+00:00
|
{}
|
2023-09-21T13:52:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Note
> some reward-model (RM) data from public datasets
- format
Thanks
- beyond/rlhf-reward-single-round-trans_chinese :
- dikw/hh_rlhf_cn
- liyucheng/zhihu_rlhf_3k
|
[
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
"TAGS\n#region-us \n",
"# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
[
6,
60
] |
[
"passage: TAGS\n#region-us \n# Note\n> some rm data from public dataset\n\n- format\n\n\nThanks \n- beyond/rlhf-reward-single-round-trans_chinese : \n- dikw/hh_rlhf_cn\n- liyucheng/zhihu_rlhf_3k"
] |
fddb328cad72d4e8eff6368bc2a8da9e86fd4c02
|
# Parallel Global Voices (English-French)
Parallel Global Voices EN-FR is a parallel corpus generated from the Global Voices multilingual group of websites (http://globalvoices.org/), where volunteers publish and translate news stories in more than 40 languages. The original content from the Global Voices websites is available by the authors and publishers under a Creative Commons Attribution license. The content was crawled in July-August 2015 by researchers at the NLP group of the Institute for Language and Speech Processing. Documents that are translations of each other were paired on the basis of their link information. After document pairing, segment alignments were automatically extracted. The results of the automatic alignment at document and segment level are distributed under a Creative Commons Attribution license.
### Attribution details
Parallel Global Voices (English - French) was created for the European Language Resources Coordination Action (ELRC) (http://lr-coordination.eu/) by researchers at the NLP group of the Institute for Language and Speech Processing (http://www.ilsp.gr/) with primary data copyrighted by Parallel Global Voices (https://globalvoices.org/) and is licensed under "CC-BY 3.0" (https://creativecommons.org/licenses/by/3.0/).
|
Nicolas-BZRD/Parallel_Global_Voices_English_French
|
[
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:fr",
"license:cc-by-3.0",
"parallel",
"parallel data",
"region:us"
] |
2023-09-21T14:03:00+00:00
|
{"language": ["en", "fr"], "license": "cc-by-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "fr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89720129, "num_examples": 342060}], "download_size": 57746668, "dataset_size": 89720129}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["parallel", "parallel data"]}
|
2023-09-21T14:40:05+00:00
|
[] |
[
"en",
"fr"
] |
TAGS
#task_categories-translation #size_categories-100K<n<1M #language-English #language-French #license-cc-by-3.0 #parallel #parallel data #region-us
|
# Parallel Global Voices (English-French)
Parallel Global Voices EN-FR is a parallel corpus generated from the Global Voices multilingual group of websites (URL where volunteers publish and translate news stories in more than 40 languages. The original content from the Global Voices websites is available by the authors and publishers under a Creative Commons Attribution license. The content was crawled in July-August 2015 by researchers at the NLP group of the Institute for Language and Speech Processing. Documents that are translations of each other were paired on the basis of their link information. After document pairing, segment alignments were automatically extracted. The results of the automatic alignment at document and segment level are distributed under a Creative Commons Attribution license.
### Attribution details
Parallel Global Voices (English - French) was created for the European Language Resources Coordination Action (ELRC) (URL by researchers at the NLP group of the Institute for Language and Speech Processing (URL with primary data copyrighted by Parallel Global Voices (URL and is licensed under "CC-BY 3.0" (URL
|
[
"# Parallel Global Voices (English-French)\n\nParallel Global Voices EN-FR is a parallel corpus generated from the Global Voices multilingual group of websites (URL where volunteers publish and translate news stories in more than 40 languages. The original content from the Global Voices websites is available by the authors and publishers under a Creative Commons Attribution license. The content was crawled in July-August 2015 by researchers at the NLP group of the Institute for Language and Speech Processing. Documents that are translations of each other were paired on the basis of their link information. After document pairing, segment alignments were automatically extracted. The results of the automatic alignment at document and segment level are distributed under a Creative Commons Attribution license.",
"### Attribution details\n\nParallel Global Voices (English - French) was created for the European Language Resources Coordination Action (ELRC) (URL by researchers at the NLP group of the Institute for Language and Speech Processing (URL with primary data copyrighted by Parallel Global Voices (URL and is licensed under \"CC-BY 3.0\" (URL"
] |
[
"TAGS\n#task_categories-translation #size_categories-100K<n<1M #language-English #language-French #license-cc-by-3.0 #parallel #parallel data #region-us \n",
"# Parallel Global Voices (English-French)\n\nParallel Global Voices EN-FR is a parallel corpus generated from the Global Voices multilingual group of websites (URL where volunteers publish and translate news stories in more than 40 languages. The original content from the Global Voices websites is available by the authors and publishers under a Creative Commons Attribution license. The content was crawled in July-August 2015 by researchers at the NLP group of the Institute for Language and Speech Processing. Documents that are translations of each other were paired on the basis of their link information. After document pairing, segment alignments were automatically extracted. The results of the automatic alignment at document and segment level are distributed under a Creative Commons Attribution license.",
"### Attribution details\n\nParallel Global Voices (English - French) was created for the European Language Resources Coordination Action (ELRC) (URL by researchers at the NLP group of the Institute for Language and Speech Processing (URL with primary data copyrighted by Parallel Global Voices (URL and is licensed under \"CC-BY 3.0\" (URL"
] |
[
55,
160,
73
] |
[
"passage: TAGS\n#task_categories-translation #size_categories-100K<n<1M #language-English #language-French #license-cc-by-3.0 #parallel #parallel data #region-us \n# Parallel Global Voices (English-French)\n\nParallel Global Voices EN-FR is a parallel corpus generated from the Global Voices multilingual group of websites (URL where volunteers publish and translate news stories in more than 40 languages. The original content from the Global Voices websites is available by the authors and publishers under a Creative Commons Attribution license. The content was crawled in July-August 2015 by researchers at the NLP group of the Institute for Language and Speech Processing. Documents that are translations of each other were paired on the basis of their link information. After document pairing, segment alignments were automatically extracted. The results of the automatic alignment at document and segment level are distributed under a Creative Commons Attribution license.### Attribution details\n\nParallel Global Voices (English - French) was created for the European Language Resources Coordination Action (ELRC) (URL by researchers at the NLP group of the Institute for Language and Speech Processing (URL with primary data copyrighted by Parallel Global Voices (URL and is licensed under \"CC-BY 3.0\" (URL"
] |
9363c426446f40f667cce232975d08827c9eba3e
|
# Dataset Card for "qa_wikipedia_retrieved_chunks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
legacy107/qa_wikipedia_retrieved_chunks
|
[
"region:us"
] |
2023-09-21T14:03:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "answer", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "retrieved_context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6212832895, "num_examples": 110970}, {"name": "validation", "num_bytes": 732218436, "num_examples": 13833}, {"name": "test", "num_bytes": 763004753, "num_examples": 13873}], "download_size": 420701697, "dataset_size": 7708056084}}
|
2023-09-28T04:16:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "qa_wikipedia_retrieved_chunks"
More Information needed
|
[
"# Dataset Card for \"qa_wikipedia_retrieved_chunks\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"qa_wikipedia_retrieved_chunks\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"qa_wikipedia_retrieved_chunks\"\n\nMore Information needed"
] |
8ae3b3e2818be84692f62f80731c2af285023790
|
# [WIP] Dataset Card for "citizenship-test-da"
*Please note that this dataset and dataset card both are works in progress. For now refer to the related [thesis](https://sorenmulli.github.io/thesis/thesis.pdf) for all details*
This dataset contains scraped questions and answers from Danish citizenship tests (Danish: *indfødsretsprøver* and *medborgerskabsprøver*) from June 2019 to May 2023, taken from PDFs produced by ''Styrelsen for International Rekruttering og Integration'' (SIRI).
The dataset is released as an appendix to the thesis [''Are GLLMs Danoliterate? Benchmarking Generative NLP in Danish''](https://sorenmulli.github.io/thesis/thesis.pdf) with permission from SIRI for this specific purpose.
The PDFs are available on [SIRI's website](https://siri.dk/nyheder/?categorizations=9115).
The `default` configuration has been semi-automatically cleaned to remove PDF artifacts using the [Alvenir 3gram DSL language model](https://github.com/danspeech/danspeech/releases/tag/v0.02-alpha).
The examples were not deduplicated.
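For illustration, a minimal sketch (using the Hugging Face `datasets` library) that loads the cleaned `default` configuration and formats one question as a multiple-choice prompt; the column names follow the dataset's features (`question`, `option-A`/`option-B`/`option-C`, `correct`):
```python
from datasets import load_dataset

ds = load_dataset("sorenmulli/citizenship-test-da", "default", split="train")

example = ds[0]
prompt = (
    f"{example['question']}\n"
    f"A. {example['option-A']}\n"
    f"B. {example['option-B']}\n"
    f"C. {example['option-C']}\n"
    "Answer:"
)
print(prompt)
print("Correct option:", example["correct"])
```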
|
sorenmulli/citizenship-test-da
|
[
"region:us"
] |
2023-09-21T14:15:51+00:00
|
{"dataset_info": [{"config_name": "default", "features": [{"name": "question", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "option-A", "dtype": "string"}, {"name": "option-B", "dtype": "string"}, {"name": "option-C", "dtype": "string"}, {"name": "correct", "dtype": "string"}, {"name": "origin", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 103251.0, "num_examples": 605}], "download_size": 43667, "dataset_size": 103251.0}, {"config_name": "raw", "features": [{"name": "question", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "option-A", "dtype": "string"}, {"name": "option-B", "dtype": "string"}, {"name": "option-C", "dtype": "string"}, {"name": "correct", "dtype": "string"}, {"name": "origin", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 103906, "num_examples": 605}], "download_size": 45297, "dataset_size": 103906}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "raw", "data_files": [{"split": "train", "path": "raw/train-*"}]}]}
|
2024-01-15T19:34:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# [WIP] Dataset Card for "citizenship-test-da"
*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*
This dataset contains scraped questions and answers from Danish citizenship tests (Danish: *indfødsretsprøver* and *medborgerskabsprøver*) from June 2019 to May 2023, taken from PDFs produced by ''Styrelsen for International Rekruttering og Integration'' (SIRI).
The dataset is released as an appendix to the thesis ''Are GLLMs Danoliterate? Benchmarking Generative NLP in Danish'' with permission from SIRI for this specific purpose.
The PDFs are available on SIRI's website.
The 'default' configuration has been semi-automatically cleaned to remove PDF artifacts using the Alvenir 3gram DSL language model.
The examples were not deduplicated.
|
[
"# [WIP] Dataset Card for \"citizenship-test-da\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*\n\nThis dataset contains scraped questions and answers from Danish citizen tests (Danish: *indfødsretsprøver* og *medborgerskabsprøver*) from Juni 2019 to May 2023 from PDF's produced by ''Styrelsen for International Rekruttering og Integration'' (SIRI).\n\nThe dataset is released as an appendix to the thesis ''Are GLLMs Danoliterate? Benchmarking Generative NLP in Danish'' and permission by SIRI for this specific purpose.\n\n\nThe PDF's are available on SIRI's website.\nThe 'default' configuration has been semi-automatically cleaned to remove PDF artifacts using the Alvenir 3gram DSL language model.\nThe examples were not deduplicated."
] |
[
"TAGS\n#region-us \n",
"# [WIP] Dataset Card for \"citizenship-test-da\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*\n\nThis dataset contains scraped questions and answers from Danish citizen tests (Danish: *indfødsretsprøver* og *medborgerskabsprøver*) from Juni 2019 to May 2023 from PDF's produced by ''Styrelsen for International Rekruttering og Integration'' (SIRI).\n\nThe dataset is released as an appendix to the thesis ''Are GLLMs Danoliterate? Benchmarking Generative NLP in Danish'' and permission by SIRI for this specific purpose.\n\n\nThe PDF's are available on SIRI's website.\nThe 'default' configuration has been semi-automatically cleaned to remove PDF artifacts using the Alvenir 3gram DSL language model.\nThe examples were not deduplicated."
] |
[
6,
210
] |
[
"passage: TAGS\n#region-us \n# [WIP] Dataset Card for \"citizenship-test-da\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*\n\nThis dataset contains scraped questions and answers from Danish citizen tests (Danish: *indfødsretsprøver* og *medborgerskabsprøver*) from Juni 2019 to May 2023 from PDF's produced by ''Styrelsen for International Rekruttering og Integration'' (SIRI).\n\nThe dataset is released as an appendix to the thesis ''Are GLLMs Danoliterate? Benchmarking Generative NLP in Danish'' and permission by SIRI for this specific purpose.\n\n\nThe PDF's are available on SIRI's website.\nThe 'default' configuration has been semi-automatically cleaned to remove PDF artifacts using the Alvenir 3gram DSL language model.\nThe examples were not deduplicated."
] |
9725ad9fbe6d0e41707ea198f22a2fc1dc6193fd
|
# Bangumi Image Base of Fate Stay Night [ufotable]
This is the image base of the bangumi Fate Stay Night [UFOTABLE]. We detected 27 characters and 3899 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 742 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 31 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 49 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 74 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 117 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 62 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 19 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 1211 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 74 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 98 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 7 | [Download](10/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 11 | 117 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 306 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 18 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 164 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 330 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 18 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 60 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 76 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 20 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 16 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 38 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 34 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 34 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 8 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 11 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 165 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/fatestaynightufotable
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-21T14:20:37+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T08:53:51+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Fate Stay Night [ufotable]
================================================
This is the image base of the bangumi Fate Stay Night [UFOTABLE]. We detected 27 characters and 3899 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (each image has roughly a 1% probability of being noise).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
7556455f20885296d1ffac19cd80c052cc573eb0
|
# Dataset Card for "coco_captions_T5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Doub7e/coco_captions_T5
|
[
"region:us"
] |
2023-09-21T14:33:46+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "blip_caption_beam_5", "dtype": "string"}, {"name": "T5_last_hidden_states", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "sentences_raw", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 416620666.0, "num_examples": 5000}], "download_size": 445433251, "dataset_size": 416620666.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T14:34:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "coco_captions_T5"
More Information needed
|
[
"# Dataset Card for \"coco_captions_T5\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"coco_captions_T5\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"coco_captions_T5\"\n\nMore Information needed"
] |
ce8192a3e6955b3752f865a17fa48460791a87c5
|
# Dataset Card for "climate-global-temp-country"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vitaliy-sharandin/climate-global-temp-country
|
[
"region:us"
] |
2023-09-21T14:44:35+00:00
|
{"dataset_info": {"features": [{"name": "Year", "dtype": "int64"}, {"name": "China", "dtype": "float64"}, {"name": "India", "dtype": "float64"}, {"name": "Poland", "dtype": "float64"}, {"name": "United States", "dtype": "float64"}, {"name": "World", "dtype": "float64"}, {"name": "dt", "dtype": "timestamp[ns, tz=UTC]"}], "splits": [{"name": "train", "num_bytes": 3472, "num_examples": 62}], "download_size": 7056, "dataset_size": 3472}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T14:48:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "climate-global-temp-country"
More Information needed
|
[
"# Dataset Card for \"climate-global-temp-country\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"climate-global-temp-country\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"climate-global-temp-country\"\n\nMore Information needed"
] |
7698fde9abcef0a6b2c6026569f46991a45230c6
|
# Dataset Card for "Hourly_London_Bexley_01-01-2010_TO_01-01-2015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
luqman8001/Hourly_London_Bexley_01-01-2010_TO_01-01-2015
|
[
"region:us"
] |
2023-09-21T14:55:09+00:00
|
{"dataset_info": {"features": [{"name": "Ozone", "dtype": "float32"}, {"name": "Nitric oxide", "dtype": "float32"}, {"name": "Nitrogen dioxide", "dtype": "float32"}, {"name": "Nitrogen oxides as nitrogen dioxide", "dtype": "float32"}, {"name": "Sulphur dioxide", "dtype": "float32"}, {"name": "Carbon monoxide", "dtype": "float32"}, {"name": "PM10 particulate matter (Hourly measured)", "dtype": "float32"}, {"name": "Non-volatile PM10 (Hourly measured)", "dtype": "float32"}, {"name": "Volatile PM10 (Hourly measured)", "dtype": "float32"}, {"name": "PM2.5 particulate matter (Hourly measured)", "dtype": "float32"}, {"name": "Non-volatile PM2.5 (Hourly measured)", "dtype": "float32"}, {"name": "Volatile PM2.5 (Hourly measured)", "dtype": "float32"}, {"name": "Modelled Wind Direction", "dtype": "float32"}, {"name": "Modelled Wind Speed", "dtype": "float32"}, {"name": "Modelled Temperature", "dtype": "float32"}, {"name": "Datetime", "dtype": "timestamp[ns]"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1152084, "num_examples": 15159}], "download_size": 565446, "dataset_size": 1152084}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T14:55:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Hourly_London_Bexley_01-01-2010_TO_01-01-2015"
More Information needed
|
[
"# Dataset Card for \"Hourly_London_Bexley_01-01-2010_TO_01-01-2015\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Hourly_London_Bexley_01-01-2010_TO_01-01-2015\"\n\nMore Information needed"
] |
[
6,
29
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Hourly_London_Bexley_01-01-2010_TO_01-01-2015\"\n\nMore Information needed"
] |
d4c4ac9a79821d52c35a4c9aef20917a3bd51746
|
# Dataset Card for "jawiki-20230911"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tmfi/jawiki-20230911
|
[
"region:us"
] |
2023-09-21T15:02:39+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8129791520, "num_examples": 1386531}], "download_size": 3964405981, "dataset_size": 8129791520}}
|
2023-09-21T15:23:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "jawiki-20230911"
More Information needed
|
[
"# Dataset Card for \"jawiki-20230911\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"jawiki-20230911\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"jawiki-20230911\"\n\nMore Information needed"
] |
a373643dc54a41677c635c32fe8ce54958b27b8d
|
# Dataset Card for "grammarly_coedit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/grammarly_coedit
|
[
"region:us"
] |
2023-09-21T15:25:13+00:00
|
{"dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "src", "dtype": "string"}, {"name": "tgt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19943349, "num_examples": 82466}], "download_size": 11658767, "dataset_size": 19943349}}
|
2023-09-21T15:25:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "grammarly_coedit"
More Information needed
|
[
"# Dataset Card for \"grammarly_coedit\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"grammarly_coedit\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"grammarly_coedit\"\n\nMore Information needed"
] |
cf4c12747c8caa4bbc9e1ae4aacc4842cf676c59
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Burgod/Pepeto
|
[
"region:us"
] |
2023-09-21T15:28:38+00:00
|
{}
|
2023-09-21T16:50:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ac8d17bafded11a43c66aa81470a1be9d40ce3e7
|
# Dataset Card for "data_for_synthesis_with_entities_align_v5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/data_for_synthesis_with_entities_align_v5
|
[
"region:us"
] |
2023-09-21T15:42:12+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "sentence_annotation", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "file", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "origin_transcription", "dtype": "string"}, {"name": "sentence_norm", "dtype": "string"}, {"name": "sentence_norm_v2", "dtype": "string"}, {"name": "w2v2_large_transcription", "dtype": "string"}, {"name": "wer", "dtype": "float64"}, {"name": "entities_norm", "list": [{"name": "filler", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "entities_align", "dtype": "string"}, {"name": "entities_score", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2358311345, "num_examples": 4430}], "download_size": 446162189, "dataset_size": 2358311345}}
|
2023-09-21T17:17:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_for_synthesis_with_entities_align_v5"
More Information needed
|
[
"# Dataset Card for \"data_for_synthesis_with_entities_align_v5\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_for_synthesis_with_entities_align_v5\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_for_synthesis_with_entities_align_v5\"\n\nMore Information needed"
] |
a8ed650c98064a40770bd8ad6ac4d9fca57e8dd7
|
# Dataset Card for "kinopoisk_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/kinopoisk_raw
|
[
"region:us"
] |
2023-09-21T15:50:44+00:00
|
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "grade3", "dtype": "string"}, {"name": "movie_name", "dtype": "string"}, {"name": "part", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "grade10", "dtype": "string"}, {"name": "Idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 138684842, "num_examples": 36591}], "download_size": 70387577, "dataset_size": 138684842}}
|
2023-09-21T15:54:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kinopoisk_raw"
More Information needed
|
[
"# Dataset Card for \"kinopoisk_raw\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kinopoisk_raw\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kinopoisk_raw\"\n\nMore Information needed"
] |
53f92f3f403ed3107a3cf8ec70f53c40922e2933
|
# Greetings [TXT dataset]
A dataset comprising artificially generated **greetings** derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. These greetings cover various types and are expressed in multiple languages.
## Prompt
The prompt used:
```txt
Please generate a diverse range of English greetings, and I'll guide you to continue if I require more. You can also incorporate greetings from different languages and cultures for added diversity. No need for explanations or additional information.
```
## TODO
- Categorize them into types (Formal, Informal/Casual, Professional, Family, Friendship, Multilingual, ...) and Cultural Origin (General, Indian, British, Australian, ...)
## Disclaimer
Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
|
Tanvir1337/greetings
|
[
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"license:cdla-sharing-1.0",
"GPT-3.5",
"GPT-4",
"Claude",
"Bard",
"Alpaca",
"LLaMA",
"LLaMA-2",
"Vicuna",
"PaLM-2",
"Multilingual",
"region:us"
] |
2023-09-21T15:52:51+00:00
|
{"license": "cdla-sharing-1.0", "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "pretty_name": "Greetings", "tags": ["GPT-3.5", "GPT-4", "Claude", "Bard", "Alpaca", "LLaMA", "LLaMA-2", "Vicuna", "PaLM-2", "Multilingual"]}
|
2023-10-14T14:10:38+00:00
|
[] |
[] |
TAGS
#multilinguality-multilingual #size_categories-1K<n<10K #license-cdla-sharing-1.0 #GPT-3.5 #GPT-4 #Claude #Bard #Alpaca #LLaMA #LLaMA-2 #Vicuna #PaLM-2 #Multilingual #region-us
|
# Greetings [TXT dataset]
A dataset comprising artificially generated greetings derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. These greetings cover various types and are expressed in multiple languages.
## Prompt
The prompt used:
## TODO
- Categorize them into types (Formal, Informal/Casual, Professional, Family, Friendship, Multilingual, ...) and Cultural Origin (General, Indian, British, Australian, ...)
## Disclaimer
Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
|
[
"# Greetings [TXT dataset]\n\nA dataset comprising artificially generated greetings derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. These greetings cover various types and are expressed in multiple languages.",
"## Prompt\n\nThe prompt used:",
"## TODO\n\n- Categorize them into types (Formal, Informal/Casual, Professional, Family, Friendship, Multilingual, ...) and Cultural Origin (General, Indian, British, Australian, ...)",
"## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality."
] |
[
"TAGS\n#multilinguality-multilingual #size_categories-1K<n<10K #license-cdla-sharing-1.0 #GPT-3.5 #GPT-4 #Claude #Bard #Alpaca #LLaMA #LLaMA-2 #Vicuna #PaLM-2 #Multilingual #region-us \n",
"# Greetings [TXT dataset]\n\nA dataset comprising artificially generated greetings derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. These greetings cover various types and are expressed in multiple languages.",
"## Prompt\n\nThe prompt used:",
"## TODO\n\n- Categorize them into types (Formal, Informal/Casual, Professional, Family, Friendship, Multilingual, ...) and Cultural Origin (General, Indian, British, Australian, ...)",
"## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality."
] |
[
77,
92,
8,
47,
73
] |
[
"passage: TAGS\n#multilinguality-multilingual #size_categories-1K<n<10K #license-cdla-sharing-1.0 #GPT-3.5 #GPT-4 #Claude #Bard #Alpaca #LLaMA #LLaMA-2 #Vicuna #PaLM-2 #Multilingual #region-us \n# Greetings [TXT dataset]\n\nA dataset comprising artificially generated greetings derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. These greetings cover various types and are expressed in multiple languages.## Prompt\n\nThe prompt used:## TODO\n\n- Categorize them into types (Formal, Informal/Casual, Professional, Family, Friendship, Multilingual, ...) and Cultural Origin (General, Indian, British, Australian, ...)## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality."
] |
703def4e438d580e5950dc472447be67437f72c9
|
# Dataset Card for "PIPPA-lmgym"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/PIPPA-lmgym-old
|
[
"region:us"
] |
2023-09-21T16:02:23+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33744003688, "num_examples": 415409}], "download_size": 0, "dataset_size": 33744003688}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T17:49:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "PIPPA-lmgym"
More Information needed
|
[
"# Dataset Card for \"PIPPA-lmgym\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"PIPPA-lmgym\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"PIPPA-lmgym\"\n\nMore Information needed"
] |
aa4f34d3d2d3231299b5b03d9b3e5a20da45aa18
|
View the project page:
https://meta-math.github.io/
See our paper at https://arxiv.org/abs/2309.12284
## Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
You can check the `original_question` field in `meta-math/MetaMathQA`; each item is from the GSM8K or MATH train set.
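As a quick sanity check, you can load the dataset and inspect the `original_question` field yourself. The snippet below is a minimal sketch using the Hugging Face `datasets` library; the `train` split name is an assumption, and only the `original_question` field is confirmed above.
```python
# Minimal sketch (not an official example): inspect the `original_question` field of MetaMathQA.
from datasets import load_dataset

# The "train" split name is an assumption; adjust it if the dataset exposes a different split.
metamath = load_dataset("meta-math/MetaMathQA", split="train")

sample = metamath[0]
print(sample.keys())                # lists all available fields of one record
print(sample["original_question"])  # the GSM8K / MATH train question the augmentation started from
```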
## Model Details
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Mistral-7B model. We are glad to see that using the MetaMathQA datasets and changing the base model from LLaMA-2-7B to Mistral-7B boosts the GSM8K performance from 66.5 to **77.7**.
To fine-tune Mistral-7B, I would suggest using a smaller learning rate (usually 1/5 to 1/10 of the learning rate used for LLaMA-2-7B) and keeping the other training arguments unchanged.
More training details and scripts can be seen at [https://github.com/meta-math/MetaMath](https://github.com/meta-math/MetaMath).
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
prompting template:
'''
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
'''
where you need to use your query question to replace the {instruction}
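For illustration, here is a minimal sketch of filling the template above and generating with the Hugging Face `transformers` API. The model id and the generation settings are assumptions for the example, not the exact settings used for our evaluations.
```python
# Minimal usage sketch (assumptions noted above): fill the prompt template and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-math/MetaMath-Mistral-7B"  # assumed Hub id of the released model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)
prompt = template.format(
    instruction="Natalia sold clips to 48 friends. Each friend bought 2 clips. How many clips did she sell?"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)  # generation settings are placeholders
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```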
There is another interesting repo about Arithmo-Mistral-7B at [https://huggingface.co/akjindal53244/Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B), where they combine our MetaMathQA dataset and the MathInstruct datasets to train a powerful model. Thanks again for their contributions.
We will also try to train on the combination of the **MetaMathQA** and **MathInstruct** datasets, and will release all the results and training details.
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
We encourage anyone to use our MetaMathQA datasets. We are very happy to see the following models trained on MetaMathQA achieve very promising performance!
OpenChat-3.5 (https://huggingface.co/openchat/openchat_3.5)
CausalLM (https://huggingface.co/CausalLM/14B)
zephyr (https://huggingface.co/qblocks/zephyr-7b-alpha_metamathqa)
Ziya2 (https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base)
# Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
```
|
meta-math/MetaMathQA
|
[
"license:mit",
"math",
"math-qa",
"arxiv:2309.12284",
"region:us"
] |
2023-09-21T16:22:46+00:00
|
{"license": "mit", "tags": ["math", "math-qa"]}
|
2023-12-21T01:35:53+00:00
|
[
"2309.12284"
] |
[] |
TAGS
#license-mit #math #math-qa #arxiv-2309.12284 #region-us
|
View the project page:
URL
see our paper at URL
Note
----
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
**None of the augmented data is from the testing set.**
You can check the 'original\_question' field in 'meta-math/MetaMathQA'; each item is from the GSM8K or MATH train set.
Model Details
-------------
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Mistral-7B model. We are glad to see that using the MetaMathQA datasets and changing the base model from LLaMA-2-7B to Mistral-7B boosts the GSM8K performance from 66.5 to 77.7.
To fine-tune Mistral-7B, I would suggest using a smaller learning rate (usually 1/5 to 1/10 of the learning rate used for LLaMA-2-7B) and keeping the other training arguments unchanged.
More training details and scripts can be seen at URL
Installation
------------
Model Usage
-----------
prompting template:
'''
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
'''
where you need to use your query question to replace the {instruction}
There is another interesting repo about Arithmo-Mistral-7B at URL where they combine our MetaMathQA dataset and the MathInstruct datasets to train a powerful model. Thanks again for their contributions.
We will also try to train on the combination of the MetaMathQA and MathInstruct datasets, and will release all the results and training details.
Experiments
-----------
Model: MPT-7B, GSM8k Pass@1: 6.8, MATH Pass@1: 3.0
Model: Falcon-7B, GSM8k Pass@1: 6.8, MATH Pass@1: 2.3
Model: LLaMA-1-7B, GSM8k Pass@1: 11.0, MATH Pass@1: 2.9
Model: LLaMA-2-7B, GSM8k Pass@1: 14.6, MATH Pass@1: 2.5
Model: MPT-30B, GSM8k Pass@1: 15.2, MATH Pass@1: 3.1
Model: LLaMA-1-13B, GSM8k Pass@1: 17.8, MATH Pass@1: 3.9
Model: GPT-Neo-2.7B, GSM8k Pass@1: 19.5, MATH Pass@1: --
Model: Falcon-40B, GSM8k Pass@1: 19.6, MATH Pass@1: 2.5
Model: Baichuan-chat-13B, GSM8k Pass@1: 23.9, MATH Pass@1: --
Model: Vicuna-v1.3-13B, GSM8k Pass@1: 27.6, MATH Pass@1: --
Model: LLaMA-2-13B, GSM8k Pass@1: 28.7, MATH Pass@1: 3.9
Model: InternLM-7B, GSM8k Pass@1: 31.2, MATH Pass@1: --
Model: ChatGLM-2-6B, GSM8k Pass@1: 32.4, MATH Pass@1: --
Model: GPT-J-6B, GSM8k Pass@1: 34.9, MATH Pass@1: --
Model: LLaMA-1-33B, GSM8k Pass@1: 35.6, MATH Pass@1: 3.9
Model: LLaMA-2-34B, GSM8k Pass@1: 42.2, MATH Pass@1: 6.24
Model: RFT-7B, GSM8k Pass@1: 50.3, MATH Pass@1: --
Model: LLaMA-1-65B, GSM8k Pass@1: 50.9, MATH Pass@1: 10.6
Model: Qwen-7B, GSM8k Pass@1: 51.6, MATH Pass@1: --
Model: WizardMath-7B, GSM8k Pass@1: 54.9, MATH Pass@1: 10.7
Model: LLaMA-2-70B, GSM8k Pass@1: 56.8, MATH Pass@1: 13.5
Model: WizardMath-13B, GSM8k Pass@1: 63.9, MATH Pass@1: 14.0
Model: MAmmoTH-7B (COT), GSM8k Pass@1: 50.5, MATH Pass@1: 10.4
Model: MAmmoTH-7B (POT+COT), GSM8k Pass@1: 53.6, MATH Pass@1: 31.5
Model: Arithmo-Mistral-7B, GSM8k Pass@1: 74.7, MATH Pass@1: 25.3
Model: MetaMath-7B, GSM8k Pass@1: 66.5, MATH Pass@1: 19.8
Model: MetaMath-13B, GSM8k Pass@1: 72.3, MATH Pass@1: 22.4
Model: MetaMath-Mistral-7B, GSM8k Pass@1: 77.7, MATH Pass@1: 28.2
We encourage anyone to use our MetaMathQA datasets. We are very happy to see the following models trained on MetaMathQA achieve very promising performance!
OpenChat-3.5 (URL
CausalLM (URL
zephyr (URL
Ziya2 (URL
|
[
"### Instruction:\\n{instruction}\\n\\n### Response: Let's think step by step.\"\n\n\n'''\n\n\nwhere you need to use your query question to replace the {instruction}\n\n\nThere is another interesting repo about Arithmo-Mistral-7B at URL where they combine our MetaMathQA dataset and MathInstruct datasets to train a powerful model. Thanks agian for their contributions.\nWe would also try to train the combination of MetaMathQA and MathInstruct datasets, and also open all the results and training details.\n\n\nExperiments\n-----------\n\n\nModel: MPT-7B, GSM8k Pass@1: 6.8, MATH Pass@1: 3.0\nModel: Falcon-7B, GSM8k Pass@1: 6.8, MATH Pass@1: 2.3\nModel: LLaMA-1-7B, GSM8k Pass@1: 11.0, MATH Pass@1: 2.9\nModel: LLaMA-2-7B, GSM8k Pass@1: 14.6, MATH Pass@1: 2.5\nModel: MPT-30B, GSM8k Pass@1: 15.2, MATH Pass@1: 3.1\nModel: LLaMA-1-13B, GSM8k Pass@1: 17.8, MATH Pass@1: 3.9\nModel: GPT-Neo-2.7B, GSM8k Pass@1: 19.5, MATH Pass@1: --\nModel: Falcon-40B, GSM8k Pass@1: 19.6, MATH Pass@1: 2.5\nModel: Baichuan-chat-13B, GSM8k Pass@1: 23.9, MATH Pass@1: --\nModel: Vicuna-v1.3-13B, GSM8k Pass@1: 27.6, MATH Pass@1: --\nModel: LLaMA-2-13B, GSM8k Pass@1: 28.7, MATH Pass@1: 3.9\nModel: InternLM-7B, GSM8k Pass@1: 31.2, MATH Pass@1: --\nModel: ChatGLM-2-6B, GSM8k Pass@1: 32.4, MATH Pass@1: --\nModel: GPT-J-6B, GSM8k Pass@1: 34.9, MATH Pass@1: --\nModel: LLaMA-1-33B, GSM8k Pass@1: 35.6, MATH Pass@1: 3.9\nModel: LLaMA-2-34B, GSM8k Pass@1: 42.2, MATH Pass@1: 6.24\nModel: RFT-7B, GSM8k Pass@1: 50.3, MATH Pass@1: --\nModel: LLaMA-1-65B, GSM8k Pass@1: 50.9, MATH Pass@1: 10.6\nModel: Qwen-7B, GSM8k Pass@1: 51.6, MATH Pass@1: --\nModel: WizardMath-7B, GSM8k Pass@1: 54.9, MATH Pass@1: 10.7\nModel: LLaMA-2-70B, GSM8k Pass@1: 56.8, MATH Pass@1: 13.5\nModel: WizardMath-13B, GSM8k Pass@1: 63.9, MATH Pass@1: 14.0\nModel: MAmmoTH-7B (COT), GSM8k Pass@1: 50.5, MATH Pass@1: 10.4\nModel: MAmmoTH-7B (POT+COT), GSM8k Pass@1: 53.6, MATH Pass@1: 31.5\nModel: Arithmo-Mistral-7B, GSM8k Pass@1: 74.7, MATH Pass@1: 25.3\nModel: MetaMath-7B, GSM8k Pass@1: 66.5, MATH Pass@1: 19.8\nModel: MetaMath-13B, GSM8k Pass@1: 72.3, MATH Pass@1: 22.4\nModel: MetaMath-Mistral-7B, GSM8k Pass@1: 77.7, MATH Pass@1: 28.2\n\n\nWe encourage anyone to use our MetaMathQA datasets. We are very happy to see the following models trained by MetaMathQA achieve a very promising performance!\n\n\nOpenChat-3.5 (URL\n\n\nCausalLM (URL\n\n\nzephyr (URL\n\n\nZiya2 (URL"
] |
[
"TAGS\n#license-mit #math #math-qa #arxiv-2309.12284 #region-us \n",
"### Instruction:\\n{instruction}\\n\\n### Response: Let's think step by step.\"\n\n\n'''\n\n\nwhere you need to use your query question to replace the {instruction}\n\n\nThere is another interesting repo about Arithmo-Mistral-7B at URL where they combine our MetaMathQA dataset and MathInstruct datasets to train a powerful model. Thanks agian for their contributions.\nWe would also try to train the combination of MetaMathQA and MathInstruct datasets, and also open all the results and training details.\n\n\nExperiments\n-----------\n\n\nModel: MPT-7B, GSM8k Pass@1: 6.8, MATH Pass@1: 3.0\nModel: Falcon-7B, GSM8k Pass@1: 6.8, MATH Pass@1: 2.3\nModel: LLaMA-1-7B, GSM8k Pass@1: 11.0, MATH Pass@1: 2.9\nModel: LLaMA-2-7B, GSM8k Pass@1: 14.6, MATH Pass@1: 2.5\nModel: MPT-30B, GSM8k Pass@1: 15.2, MATH Pass@1: 3.1\nModel: LLaMA-1-13B, GSM8k Pass@1: 17.8, MATH Pass@1: 3.9\nModel: GPT-Neo-2.7B, GSM8k Pass@1: 19.5, MATH Pass@1: --\nModel: Falcon-40B, GSM8k Pass@1: 19.6, MATH Pass@1: 2.5\nModel: Baichuan-chat-13B, GSM8k Pass@1: 23.9, MATH Pass@1: --\nModel: Vicuna-v1.3-13B, GSM8k Pass@1: 27.6, MATH Pass@1: --\nModel: LLaMA-2-13B, GSM8k Pass@1: 28.7, MATH Pass@1: 3.9\nModel: InternLM-7B, GSM8k Pass@1: 31.2, MATH Pass@1: --\nModel: ChatGLM-2-6B, GSM8k Pass@1: 32.4, MATH Pass@1: --\nModel: GPT-J-6B, GSM8k Pass@1: 34.9, MATH Pass@1: --\nModel: LLaMA-1-33B, GSM8k Pass@1: 35.6, MATH Pass@1: 3.9\nModel: LLaMA-2-34B, GSM8k Pass@1: 42.2, MATH Pass@1: 6.24\nModel: RFT-7B, GSM8k Pass@1: 50.3, MATH Pass@1: --\nModel: LLaMA-1-65B, GSM8k Pass@1: 50.9, MATH Pass@1: 10.6\nModel: Qwen-7B, GSM8k Pass@1: 51.6, MATH Pass@1: --\nModel: WizardMath-7B, GSM8k Pass@1: 54.9, MATH Pass@1: 10.7\nModel: LLaMA-2-70B, GSM8k Pass@1: 56.8, MATH Pass@1: 13.5\nModel: WizardMath-13B, GSM8k Pass@1: 63.9, MATH Pass@1: 14.0\nModel: MAmmoTH-7B (COT), GSM8k Pass@1: 50.5, MATH Pass@1: 10.4\nModel: MAmmoTH-7B (POT+COT), GSM8k Pass@1: 53.6, MATH Pass@1: 31.5\nModel: Arithmo-Mistral-7B, GSM8k Pass@1: 74.7, MATH Pass@1: 25.3\nModel: MetaMath-7B, GSM8k Pass@1: 66.5, MATH Pass@1: 19.8\nModel: MetaMath-13B, GSM8k Pass@1: 72.3, MATH Pass@1: 22.4\nModel: MetaMath-Mistral-7B, GSM8k Pass@1: 77.7, MATH Pass@1: 28.2\n\n\nWe encourage anyone to use our MetaMathQA datasets. We are very happy to see the following models trained by MetaMathQA achieve a very promising performance!\n\n\nOpenChat-3.5 (URL\n\n\nCausalLM (URL\n\n\nzephyr (URL\n\n\nZiya2 (URL"
] |
[
25,
861
] |
[
"passage: TAGS\n#license-mit #math #math-qa #arxiv-2309.12284 #region-us \n"
] |
beb539d83c5cd3fceea75fed640906fa57f76b88
|
# Dataset Card for "kinopoisk_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/kinopoisk_prompts
|
[
"region:us"
] |
2023-09-21T16:27:26+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 136177618, "num_examples": 36591}], "download_size": 68332043, "dataset_size": 136177618}}
|
2023-09-21T17:01:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kinopoisk_prompts"
More Information needed
|
[
"# Dataset Card for \"kinopoisk_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kinopoisk_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kinopoisk_prompts\"\n\nMore Information needed"
] |
dd1a27fdb851b3471c6dda56d14b9380a9b26c17
|
# Dataset Card for "uner-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mirfan899/uner-ner
|
[
"region:us"
] |
2023-09-21T16:27:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "DATE", "1": "DESIGNATION", "2": "LOCATION", "3": "NUMBER", "4": "O", "5": "ORGANIZATION", "6": "PERSON", "7": "TIME"}}}}], "splits": [{"name": "train", "num_bytes": 682695, "num_examples": 1145}, {"name": "validation", "num_bytes": 302036, "num_examples": 491}, {"name": "test", "num_bytes": 302036, "num_examples": 491}], "download_size": 0, "dataset_size": 1286767}}
|
2023-10-15T08:16:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "uner-ner"
More Information needed
|
[
"# Dataset Card for \"uner-ner\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"uner-ner\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"uner-ner\"\n\nMore Information needed"
] |
7adce1afa8ef7ce09d77d052c6168fc9696f91aa
|
# Dataset Card for "medical_qa_ru_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/medical_qa_ru_data
|
[
"region:us"
] |
2023-09-21T16:44:02+00:00
|
{"dataset_info": {"features": [{"name": "date", "dtype": "string"}, {"name": "categ", "dtype": "string"}, {"name": "theme", "dtype": "string"}, {"name": "desc", "dtype": "string"}, {"name": "ans", "dtype": "string"}, {"name": "spec10", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 268150120, "num_examples": 190335}], "download_size": 132020030, "dataset_size": 268150120}}
|
2023-09-21T16:44:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "medical_qa_ru_data"
More Information needed
|
[
"# Dataset Card for \"medical_qa_ru_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"medical_qa_ru_data\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"medical_qa_ru_data\"\n\nMore Information needed"
] |
d15cf8802b22093df1635783b7cf02de9cc3f41f
|
# Dataset Card for "primary_icd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ricardosantoss/primary_icd
|
[
"region:us"
] |
2023-09-21T16:55:35+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}, {"name": "ICD9_CODE", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 390398482, "num_examples": 38701}, {"name": "test", "num_bytes": 50879443, "num_examples": 5000}, {"name": "validation", "num_bytes": 50320021, "num_examples": 5000}], "download_size": 258595856, "dataset_size": 491597946}}
|
2023-09-22T20:05:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "primary_icd"
More Information needed
|
[
"# Dataset Card for \"primary_icd\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"primary_icd\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"primary_icd\"\n\nMore Information needed"
] |
15cf57d4fe97e74b3ba01da8f03fe50c9d35d02e
|
# Purpose of the dataset
Fula is a language spoken in at least 16 countries with many dialectal varieties. However, the few NLP solutions that exist take only one dialect into account, often the Nigerian one.
This is a speech-text dataset for 8 dialectal varieties of Fula, allowing the full diversity of the Fula language to be taken into account in the development of NLP solutions.
# Fula varieties in this dataset
This dataset contains 8 varieties:
- __Pulaar__: spoken in Senegal, Mauritania and western Mali.
- __Pular__: spoken in Guinea.
- __Maacina__: spoken in the center and east of Mali.
- __Liptako__: spoken in Burkina Faso and Niger.
- __Caka__: spoken in central Nigeria.
- __Bororro__: a very nomadic group living in Cameroon, the Central African Republic and Tchad.
- __Borgu__: spoken in Togo and Benin.
- __Adamawa__: spoken in Cameroon and South-East Nigeria.
# Sources of the data
Many of the corpora are from books automatically aligned using the [MMS Aligner](https://github.com/facebookresearch/fairseq/tree/main/examples/mms/data_prep).
You can check the script for scraping and aligning the corpora in the github repository [https://github.com/cawoylel/FulaSpeechCorpora](https://github.com/cawoylel/FulaSpeechCorpora)
For each variety, we give the source:
- __Pulaar__:
- We automatically align books from https://deftepulaar.com/
- We also added the Waxal Dataset from [https://huggingface.co/datasets/galsenai/waxal_dataset](https://huggingface.co/datasets/galsenai/waxal_dataset)
- __Pular__: We automatically align the bible books from [https://www.bible.com/bible/1798/MAT.1.VPFJ](https://www.bible.com/bible/1798/MAT.1.VPFJ)
- __Maacina__: We automatically align the bible books from [https://www.bible.com/bible/1175/MAT.1.FFM](https://www.bible.com/bible/1175/MAT.1.FFM)
- __Liptako__:
- We automatically align the bible books from [https://www.bible.com/bible/1032/MAT.1.FBFNT](https://www.bible.com/bible/1032/MAT.1.FBFNT)
  - We added data from the [Fulfulde Webonary dictionary](https://www.webonary.org/fulfuldeburkina/?lang=en)
- We scraped many pages from [https://media.ipsapps.org](https://media.ipsapps.org), example of page: [https://media.ipsapps.org/fuh/ora/co1/01-B001-001.html](https://media.ipsapps.org/fuh/ora/co1/01-B001-001.html)
- __Caka__: We automatically align the bible books from [https://www.bible.com/bible/1159/MAT.1.FUV](https://www.bible.com/bible/1159/MAT.1.FUV)
- __Bororro__: We automatically align the bible books from [https://www.bible.com/bible/1373/MAT.1.FUQ](https://www.bible.com/bible/1373/MAT.1.FUQ)
- __Borgu__: We automatically align the bible books from [https://www.bible.com/bible/3088/MAT.1.BFB](https://www.bible.com/bible/3088/MAT.1.BFB)
- __Adamawa__: We automatically align the bible books from [https://www.bible.com/bible/906/MAT.1.FB](https://www.bible.com/bible/906/MAT.1.FB)
# How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Borgu config, simply specify the corresponding variety config name:
```python
from datasets import load_dataset
borgu_data = load_dataset("cawoylel/FulaSpeechCorpora", "borgu")
```
You can also load all the dataset:
```python
from datasets import load_dataset
data = load_dataset("cawoylel/FulaSpeechCorpora")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
data = load_dataset("cawoylel/FulaSpeechCorpora", streaming=True)
print(next(iter(data)))
```
# Data Fields
The data fields are the same among all splits.
- **dialect** (str): The name of the dialect
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio
- **transcription** (str): Transcription of the audio file
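To make the field layout concrete, below is a minimal sketch that streams one example and prints each field. The `pulaar` split name comes from this card's configuration; everything else follows the standard `datasets` audio handling.
```python
# Minimal sketch: stream a single example and inspect its fields.
from datasets import load_dataset

# "pulaar" is one of the variety splits listed above; any other variety name works the same way.
data = load_dataset("cawoylel/FulaSpeechCorpora", split="pulaar", streaming=True)

example = next(iter(data))
print(example["dialect"])                 # name of the dialect
print(example["transcription"])           # transcription of the audio file
print(example["audio"]["sampling_rate"])  # audio dict: decoded array, sampling rate and path
```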
# Social Impact of Dataset
Like many African languages, Fula is under-represented in NLP solutions. The dataset aims to bring more linguistic diversity.
# Discussion of Biases
The corpora mainly come from books read aloud and recorded in a studio, in noise-free conditions and with a degree of hyper-articulation from the readers. Models trained on these data may therefore be less robust to noise and to spontaneous speech.
Moreover, most of the speakers are adult males, which may pose problems for generalizing the models to other types of speakers.
# Limitations
Read speech, hyper-articulation, noise robustness, etc
# Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
# Citation Information
Please cite us when using the FulaSpeechCorpora:
```
@article{fleurs2022arxiv,
title = {FulaSpeechCorpora: A multidialectal speech dataset for Fula.},
author = {Sy, Yaya and Doucouré, Dioula},
url = {https://huggingface.co/datasets/cawoylel/FulaSpeechCorpora},
  year = {2023},
}
```
|
cawoylel/FulaSpeechCorpora
|
[
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:audio-classification",
"size_categories:100K<n<1M",
"language:ff",
"speech",
"low-ressource",
"audio",
"region:us"
] |
2023-09-21T16:56:54+00:00
|
{"language": ["ff"], "size_categories": ["100K<n<1M"], "task_categories": ["automatic-speech-recognition", "text-to-speech", "audio-classification"], "pretty_name": "Fula Multidialectal Speech Corpora", "configs": [{"config_name": "default", "data_files": [{"split": "pulaar", "path": "data/pulaar-*"}, {"split": "maacina", "path": "data/maacina-*"}, {"split": "liptako", "path": "data/liptako-*"}, {"split": "caka", "path": "data/caka-*"}, {"split": "bororro", "path": "data/bororro-*"}, {"split": "borgu", "path": "data/borgu-*"}, {"split": "pular", "path": "data/pular-*"}, {"split": "adamawa", "path": "data/adamawa-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}, {"name": "dialect", "dtype": "string"}], "splits": [{"name": "pulaar", "num_bytes": 3398551955.96, "num_examples": 12880}, {"name": "maacina", "num_bytes": 2677353337.824, "num_examples": 14336}, {"name": "liptako", "num_bytes": 5858678478.536, "num_examples": 36828}, {"name": "caka", "num_bytes": 2790732470.205, "num_examples": 14865}, {"name": "bororro", "num_bytes": 2952498447.936, "num_examples": 15022}, {"name": "borgu", "num_bytes": 2849809213.278, "num_examples": 13387}, {"name": "pular", "num_bytes": 2339299211.055, "num_examples": 11779}, {"name": "adamawa", "num_bytes": 2225350403.136, "num_examples": 13504}], "download_size": 20035287564, "dataset_size": 25092273517.93}, "tags": ["speech", "low-ressource", "audio"]}
|
2023-11-24T11:25:42+00:00
|
[] |
[
"ff"
] |
TAGS
#task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-audio-classification #size_categories-100K<n<1M #language-Fulah #speech #low-ressource #audio #region-us
|
# Purpose of the dataset
Fula is a language spoken in at least 16 countries with many dialectal varieties. However, the few NLP solutions that exist take only one dialect into account, often the Nigerian one.
This is a speech-text dataset for 8 dialectal varieties of Fula, allowing the full diversity of the Fula language to be taken into account in the development of NLP solutions.
# Fula varieties in this dataset
This dataset contains 8 varieties:
- __Pulaar__: spoken in Senegal, Mauritania and western Mali.
- __Pular__: spoken in Guinea.
- __Maacina__: spoken in the center and east of Mali.
- __Liptako__: spoken in Burkina Faso and Niger.
- __Caka__: spoken in central Nigeria.
- __Bororro__: a very nomadic group living in Cameroon, the Central African Republic and Tchad.
- __Borgu__: spoken in Togo and Benin.
- __Adamawa__: spoken in Cameroon and South-East Nigeria.
# Sources of the data
Many of the corpora are from books automatically aligned using the MMS Aligner.
You can check the script for scraping and aligning the corpora in the github repository URL
For each variety, we give the source:
- __Pulaar__:
- We automatically align books from URL
- We also added the Waxal Dataset from URL
- __Pular__: We automatically align the bible books from URL
- __Maacina__: We automatically align the bible books from URL
- __Liptako__:
- We automatically align the bible books from URL
- We added data from the dictionary URL
- We scraped many pages from URL, example of page: URL
- __Caka__: We automatically align the bible books from URL
- __Bororro__: We automatically align the bible books from URL
- __Borgu__: We automatically align the bible books from URL
- __Adamawa__: We automatically align the bible books from URL
# How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
For example, to download the Borgu config, simply specify the corresponding variety config name:
You can also load all the dataset:
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
# Data Fields
The data fields are the same among all splits.
- dialect (str): The name of the dialect
- audio (dict): Audio object including loaded audio array, sampling rate and path to audio
- transcription (str): Transcription of the audio file
# Social Impact of Dataset
Like many African languages, Fula is under-represented in NLP solutions. The dataset aims to bring more linguistic diversity.
# Discussion of Biases
The corpora mainly come from books read aloud and recorded in a studio, in noise-free conditions and with a degree of hyper-articulation from the readers. Models trained on these data may therefore be less robust to noise and to spontaneous speech.
Moreover, most of the speakers are adult males, which may pose problems for generalizing the models to other types of speakers.
# Limitations
Read speech, hyper-articulation, noise robustness, etc
# Additional Information
All datasets are licensed under the Creative Commons license (CC-BY).
Please cite us when using the FulaSpeechCorpora:
|
[
"# Purpose of the dataset\n\nFula is a language spoken in at least 16 countries with many dialectal variaties. However, The few NLP solutions that exist take only one dialect into account, often the Nigerian one.\nThis is a speech-text dataset for 8 dialectal varieties of Fula, allowing the full diversity of the Fula language to be taken into account in the development of NLP solutions.",
"# Fula varieties in this dataset\n\nThis dataset contains 8 varrieties:\n- __Pulaar__: spoken in Senegal, Mauritania and West-Mali.\n- __Pular__: spoken in Guinea.\n- __Maacina__: spoken in the Center and East of Mali.\n- __Liptako__: spoken in Burkina Faso and Niger.\n- __Caka__: spoken in the Central Nigeria.\n- __Bororro__: a very nomad group living in Cameroon, Central African Republic and Tchad.\n- __Borgu__: spoken in Togo and Benin.\n- __Adamawa__: spoken in Cameroon and South-East Nigeria.",
"# Sources of the data\n\nMany of the corpora are from books automatically aligned using the MMS Aligner.\nYou can check the script for scraping and aligning the corpora in the github repository URL\n\nFor each variety, we give the source:\n- __Pulaar__:\n - We automatically align books from URL\n - We also added the Waxal Dataset from URL\n- __Pular__: We automatically align the bible books from URL\n- __Maacina__: We automatically align the bible books from URL\n- __Liptako__:\n - We automatically align the bible books from URL\n - We added data from the dictionary URL\n - We scraped many pages from URL, example of page: URL\n- __Caka__: We automatically align the bible books from URL\n- __Bororro__: We automatically align the bible books from URL\n- __Borgu__: We automatically align the bible books from URL\n- __Adamawa__: We automatically align the bible books from URL",
"# How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Borgu config, simply specify the corresponding variety config name:\n\n\n\nYou can also load all the dataset:\n\n\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"# Data Fields\n\nThe data fields are the same among all splits.\n- dialect (str): The name of the dialect\n- audio (dict): Audio object including loaded audio array, sampling rate and path to audio\n- transcription (str): Transcription of the audio file",
"# Social Impact of Dataset\n\nAs many African languages, Fula is under-represented in NLP solutions. The dataset aims to bring more linguistic diversity.",
"# Discussion of Biases\n\nLes corpus sont principalement issus de livres lus et enregistrés en studio dans conditions non bruités et avec une certaine hyper-articulation des lecteurs. Les modèles entraînés avec ces données peuvent être moins robuste au bruit et à la parole instant.\nMoreover, most of the speakers are adult males, which may pose problems for generalizing the models to other types of speakers.",
"# Limitations\n\nRead speech, hyper-articulation, noise robustness, etc",
"# Additional Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).\n\n\n\nPlease cite us when using the FulaSpeechCorpora:"
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-audio-classification #size_categories-100K<n<1M #language-Fulah #speech #low-ressource #audio #region-us \n",
"# Purpose of the dataset\n\nFula is a language spoken in at least 16 countries with many dialectal variaties. However, The few NLP solutions that exist take only one dialect into account, often the Nigerian one.\nThis is a speech-text dataset for 8 dialectal varieties of Fula, allowing the full diversity of the Fula language to be taken into account in the development of NLP solutions.",
"# Fula varieties in this dataset\n\nThis dataset contains 8 varrieties:\n- __Pulaar__: spoken in Senegal, Mauritania and West-Mali.\n- __Pular__: spoken in Guinea.\n- __Maacina__: spoken in the Center and East of Mali.\n- __Liptako__: spoken in Burkina Faso and Niger.\n- __Caka__: spoken in the Central Nigeria.\n- __Bororro__: a very nomad group living in Cameroon, Central African Republic and Tchad.\n- __Borgu__: spoken in Togo and Benin.\n- __Adamawa__: spoken in Cameroon and South-East Nigeria.",
"# Sources of the data\n\nMany of the corpora are from books automatically aligned using the MMS Aligner.\nYou can check the script for scraping and aligning the corpora in the github repository URL\n\nFor each variety, we give the source:\n- __Pulaar__:\n - We automatically align books from URL\n - We also added the Waxal Dataset from URL\n- __Pular__: We automatically align the bible books from URL\n- __Maacina__: We automatically align the bible books from URL\n- __Liptako__:\n - We automatically align the bible books from URL\n - We added data from the dictionary URL\n - We scraped many pages from URL, example of page: URL\n- __Caka__: We automatically align the bible books from URL\n- __Bororro__: We automatically align the bible books from URL\n- __Borgu__: We automatically align the bible books from URL\n- __Adamawa__: We automatically align the bible books from URL",
"# How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Borgu config, simply specify the corresponding variety config name:\n\n\n\nYou can also load all the dataset:\n\n\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.",
"# Data Fields\n\nThe data fields are the same among all splits.\n- dialect (str): The name of the dialect\n- audio (dict): Audio object including loaded audio array, sampling rate and path to audio\n- transcription (str): Transcription of the audio file",
"# Social Impact of Dataset\n\nAs many African languages, Fula is under-represented in NLP solutions. The dataset aims to bring more linguistic diversity.",
"# Discussion of Biases\n\nLes corpus sont principalement issus de livres lus et enregistrés en studio dans conditions non bruités et avec une certaine hyper-articulation des lecteurs. Les modèles entraînés avec ces données peuvent être moins robuste au bruit et à la parole instant.\nMoreover, most of the speakers are adult males, which may pose problems for generalizing the models to other types of speakers.",
"# Limitations\n\nRead speech, hyper-articulation, noise robustness, etc",
"# Additional Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).\n\n\n\nPlease cite us when using the FulaSpeechCorpora:"
] |
[
75,
90,
157,
219,
163,
60,
37,
91,
18,
36
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-audio-classification #size_categories-100K<n<1M #language-Fulah #speech #low-ressource #audio #region-us \n# Purpose of the dataset\n\nFula is a language spoken in at least 16 countries with many dialectal variaties. However, The few NLP solutions that exist take only one dialect into account, often the Nigerian one.\nThis is a speech-text dataset for 8 dialectal varieties of Fula, allowing the full diversity of the Fula language to be taken into account in the development of NLP solutions.# Fula varieties in this dataset\n\nThis dataset contains 8 varrieties:\n- __Pulaar__: spoken in Senegal, Mauritania and West-Mali.\n- __Pular__: spoken in Guinea.\n- __Maacina__: spoken in the Center and East of Mali.\n- __Liptako__: spoken in Burkina Faso and Niger.\n- __Caka__: spoken in the Central Nigeria.\n- __Bororro__: a very nomad group living in Cameroon, Central African Republic and Tchad.\n- __Borgu__: spoken in Togo and Benin.\n- __Adamawa__: spoken in Cameroon and South-East Nigeria."
] |
4bb727eae7873b3ee213de7c551eaf1135fb8793
|
# Dataset Card for "evol_70k_with_output_Xwin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pvduy/evol_70k_with_output_Xwin
|
[
"region:us"
] |
2023-09-21T17:13:59+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 165817770, "num_examples": 70000}], "download_size": 79750128, "dataset_size": 165817770}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T17:14:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "evol_70k_with_output_Xwin"
More Information needed
|
[
"# Dataset Card for \"evol_70k_with_output_Xwin\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"evol_70k_with_output_Xwin\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"evol_70k_with_output_Xwin\"\n\nMore Information needed"
] |
7622b46af99c0fed757cbd9dd9d62952f1ed57f7
|
# Dataset of Tohsaka Rin (Fate Stay Night [UFOTABLE])
This is the dataset of Tohsaka Rin (Fate Stay Night [UFOTABLE]), containing 719 images and their tags.
The core tags of this character are `long_hair, black_hair, two_side_up, ribbon, hair_ribbon, blue_eyes, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 719 | 610.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tohsaka_rin_fatestaynightufotable/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 719 | 474.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tohsaka_rin_fatestaynightufotable/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1381 | 876.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tohsaka_rin_fatestaynightufotable/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 719 | 609.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tohsaka_rin_fatestaynightufotable/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1381 | 1.05 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tohsaka_rin_fatestaynightufotable/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tohsaka_rin_fatestaynightufotable',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, anime_coloring, homurahara_academy_school_uniform, looking_at_viewer, solo |
| 1 | 5 |  |  |  |  |  | 1girl, anime_coloring, homurahara_academy_school_uniform, profile, solo, from_side |
| 2 | 6 |  |  |  |  |  | 1girl, coat, red_jacket, solo, homurahara_academy_school_uniform, green_eyes |
| 3 | 12 |  |  |  |  |  | 1girl, anime_coloring, profile, solo, from_side |
| 4 | 10 |  |  |  |  |  | 1girl, orange_scarf, solo, anime_coloring, coat |
| 5 | 17 |  |  |  |  |  | 1girl, solo, orange_scarf, red_coat, upper_body, looking_at_viewer, black_ribbon |
| 6 | 28 |  |  |  |  |  | 1girl, black_thighhighs, skirt, solo, zettai_ryouiki, coat, orange_scarf, sitting |
| 7 | 9 |  |  |  |  |  | 1girl, black_skirt, black_thighhighs, orange_scarf, pleated_skirt, red_coat, solo, zettai_ryouiki, twintails, long_sleeves |
| 8 | 12 |  |  |  |  |  | 1girl, homurahara_academy_school_uniform, solo, black_pantyhose, skirt |
| 9 | 15 |  |  |  |  |  | 1girl, homurahara_academy_school_uniform, neck_ribbon, solo, white_shirt, vest, anime_coloring, black_ribbon |
| 10 | 5 |  |  |  |  |  | 1girl, solo |
| 11 | 5 |  |  |  |  |  | 1girl, black_ribbon, solo, upper_body, anime_coloring, closed_mouth, from_side, profile, red_sweater, hair_bow, black_bow |
| 12 | 6 |  |  |  |  |  | 1girl, anime_coloring, solo, turtleneck, bangs, black_bow, hair_bow, upper_body, black_ribbon, breasts, closed_mouth, red_sweater |
| 13 | 6 |  |  |  |  |  | 1girl, solo, turtleneck, anime_coloring, upper_body, sweater, looking_at_viewer, open_mouth |
| 14 | 10 |  |  |  |  |  | 1girl, black_skirt, long_sleeves, pleated_skirt, solo, red_sweater, turtleneck, black_thighhighs, breasts, looking_at_viewer, standing, zettai_ryouiki, black_ribbon, closed_mouth, indoors, miniskirt, frown, green_eyes, shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | anime_coloring | homurahara_academy_school_uniform | looking_at_viewer | solo | profile | from_side | coat | red_jacket | green_eyes | orange_scarf | red_coat | upper_body | black_ribbon | black_thighhighs | skirt | zettai_ryouiki | sitting | black_skirt | pleated_skirt | twintails | long_sleeves | black_pantyhose | neck_ribbon | white_shirt | vest | closed_mouth | red_sweater | hair_bow | black_bow | turtleneck | bangs | breasts | sweater | open_mouth | standing | indoors | miniskirt | frown | shirt |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:------------------------------------|:--------------------|:-------|:----------|:------------|:-------|:-------------|:-------------|:---------------|:-----------|:-------------|:---------------|:-------------------|:--------|:-----------------|:----------|:--------------|:----------------|:------------|:---------------|:------------------|:--------------|:--------------|:-------|:---------------|:--------------|:-----------|:------------|:-------------|:--------|:----------|:----------|:-------------|:-----------|:----------|:------------|:--------|:--------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 12 |  |  |  |  |  | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | X | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 17 |  |  |  |  |  | X | | | X | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 28 |  |  |  |  |  | X | | | | X | | | X | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | | X | | | | | | X | X | | | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 8 | 12 |  |  |  |  |  | X | | X | | X | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | |
| 9 | 15 |  |  |  |  |  | X | X | X | | X | | | | | | | | | X | | | | | | | | | | X | X | X | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 5 |  |  |  |  |  | X | X | | | X | X | X | | | | | | X | X | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | |
| 12 | 6 |  |  |  |  |  | X | X | | | X | | | | | | | | X | X | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | |
| 13 | 6 |  |  |  |  |  | X | X | | X | X | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | X | X | | | | | |
| 14 | 10 |  |  |  |  |  | X | | | X | X | | | | | X | | | | X | X | | X | | X | X | | X | | | | | X | X | | | X | | X | | | X | X | X | X | X |
|
CyberHarem/tohsaka_rin_fatestaynightufotable
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T17:18:26+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-23T03:33:32+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Tohsaka Rin (Fate Stay Night [UFOTABLE])
===================================================
This is the dataset of Tohsaka Rin (Fate Stay Night [UFOTABLE]), containing 719 images and their tags.
The core tags of this character are 'long\_hair, black\_hair, two\_side\_up, ribbon, hair\_ribbon, blue\_eyes, brown\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
a5f8d64a68971dd5578c66384f1dab0858362d6a
|
# Goodreads 100k
Clone of Manav Dhamani's [goodreads-books-100k](https://www.kaggle.com/datasets/mdhamani/goodreads-books-100k) dataset from Kaggle.
|
euclaise/goodreads_100k
|
[
"size_categories:10K<n<100K",
"license:cc0-1.0",
"region:us"
] |
2023-09-21T17:22:42+00:00
|
{"license": "cc0-1.0", "size_categories": ["10K<n<100K"], "dataset_info": {"features": [{"name": "author", "dtype": "string"}, {"name": "desc", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "isbn", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "pages", "dtype": "int64"}, {"name": "rating", "dtype": "float64"}, {"name": "reviews", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "totalratings", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 111985794, "num_examples": 100000}], "download_size": 69614148, "dataset_size": 111985794}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T17:25:56+00:00
|
[] |
[] |
TAGS
#size_categories-10K<n<100K #license-cc0-1.0 #region-us
|
# Goodreads 100k
Clone of Manav Dhamani's goodreads-books-100k dataset from Kaggle.
|
[
"# Goodreads 100k\n\nClone of Manav Dhamani's goodreads-books-100k dataset from Kaggle."
] |
[
"TAGS\n#size_categories-10K<n<100K #license-cc0-1.0 #region-us \n",
"# Goodreads 100k\n\nClone of Manav Dhamani's goodreads-books-100k dataset from Kaggle."
] |
[
26,
28
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #license-cc0-1.0 #region-us \n# Goodreads 100k\n\nClone of Manav Dhamani's goodreads-books-100k dataset from Kaggle."
] |
12833883274243b02191f4692b2f33932be25da3
|
# Dataset Card for "data_synthesis_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/data_synthesis_v1
|
[
"region:us"
] |
2023-09-21T17:22:57+00:00
|
{"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "transcription", "dtype": "string"}, {"name": "old_transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10125909, "num_examples": 20}], "download_size": 2434457, "dataset_size": 10125909}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T23:45:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_synthesis_v1"
More Information needed
|
[
"# Dataset Card for \"data_synthesis_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_synthesis_v1\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_synthesis_v1\"\n\nMore Information needed"
] |
747f2fc4283a851046aab17ed916153800952b45
|
configs:
- config_name: default
data_files:
- split: train
path: "strain.json"
- split: test
path: "stets.json"
- split: validation
path: "sval.json"
|
DarrenLo/ygo_tcg_test
|
[
"region:us"
] |
2023-09-21T17:33:07+00:00
|
{}
|
2023-11-03T07:47:59+00:00
|
[] |
[] |
TAGS
#region-us
|
configs:
- config_name: default
data_files:
- split: train
path: "URL"
- split: test
path: "URL"
- split: validation
path: "URL"
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
8d050db338d3b0f0b9fa6b61fd1d9e8c50307b2b
|
# Dataset Card for "medical_qa_ru_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/medical_qa_ru_prompts
|
[
"region:us"
] |
2023-09-21T17:33:36+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75314313, "num_examples": 80101}], "download_size": 38675521, "dataset_size": 75314313}}
|
2023-09-21T17:33:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "medical_qa_ru_prompts"
More Information needed
|
[
"# Dataset Card for \"medical_qa_ru_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"medical_qa_ru_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"medical_qa_ru_prompts\"\n\nMore Information needed"
] |
a5c680eea74375aa97b4d8aaf018464deeef21b6
|
# Dataset of Saber (Fate Stay Night [UFOTABLE])
This is the dataset of Saber (Fate Stay Night [UFOTABLE]), containing 323 images and their tags.
The core tags of this character are `blonde_hair, green_eyes, ahoge, ribbon, hair_ribbon, braid, blue_ribbon, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 323 | 260.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saber_fatestaynightufotable/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 323 | 207.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saber_fatestaynightufotable/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 655 | 406.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saber_fatestaynightufotable/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 323 | 259.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saber_fatestaynightufotable/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 655 | 485.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saber_fatestaynightufotable/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/saber_fatestaynightufotable',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 16 |  |  |  |  |  | 1girl, solo, parody, anime_coloring, white_shirt, looking_at_viewer, upper_body, french_braid |
| 1 | 14 |  |  |  |  |  | 1girl, solo, anime_coloring, parody, puffy_sleeves, armored_dress |
| 2 | 16 |  |  |  |  |  | 1girl, armored_dress, gauntlets, solo, sword, excalibur_(fate/stay_night) |
| 3 | 7 |  |  |  |  |  | 1girl, armored_dress, gauntlets, solo, sword, excalibur_(fate/stay_night), juliet_sleeves |
| 4 | 8 |  |  |  |  |  | 1girl, armored_dress, gauntlets, single_hair_bun, solo, juliet_sleeves, breastplate |
| 5 | 10 |  |  |  |  |  | 1girl, juliet_sleeves, single_hair_bun, solo, upper_body, profile, from_side, armored_dress, braided_bun, breastplate |
| 6 | 7 |  |  |  |  |  | 1girl, braided_bun, profile, solo, anime_coloring, armor, single_hair_bun, closed_mouth, from_side, parody |
| 7 | 8 |  |  |  |  |  | 1girl, single_hair_bun, solo, white_shirt, braided_bun, from_side, profile, sidelocks, anime_coloring, closed_mouth, collared_shirt, neck_ribbon, upper_body, bangs |
| 8 | 7 |  |  |  |  |  | 1girl, cloak, solo, hood_up, yellow_raincoat, armor, parody, upper_body |
| 9 | 10 |  |  |  |  |  | 1girl, solo, pantyhose, tatami, from_side, indoors, seiza, white_shirt, blue_skirt, cushion, single_hair_bun, long_sleeves |
| 10 | 20 |  |  |  |  |  | 1girl, blue_scarf, solo, official_alternate_costume, coat, skirt |
| 11 | 11 |  |  |  |  |  | 1girl, skirt, solo, pantyhose, shinai |
| 12 | 9 |  |  |  |  |  | 1girl, elbow_gloves, bare_shoulders, official_alternate_costume, white_gloves, solo, white_dress, choker, ponytail, strapless, anime_coloring, small_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | parody | anime_coloring | white_shirt | looking_at_viewer | upper_body | french_braid | puffy_sleeves | armored_dress | gauntlets | sword | excalibur_(fate/stay_night) | juliet_sleeves | single_hair_bun | breastplate | profile | from_side | braided_bun | armor | closed_mouth | sidelocks | collared_shirt | neck_ribbon | bangs | cloak | hood_up | yellow_raincoat | pantyhose | tatami | indoors | seiza | blue_skirt | cushion | long_sleeves | blue_scarf | official_alternate_costume | coat | skirt | shinai | elbow_gloves | bare_shoulders | white_gloves | white_dress | choker | ponytail | strapless | small_breasts |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-------|:---------|:-----------------|:--------------|:--------------------|:-------------|:---------------|:----------------|:----------------|:------------|:--------|:------------------------------|:-----------------|:------------------|:--------------|:----------|:------------|:--------------|:--------|:---------------|:------------|:-----------------|:--------------|:--------|:--------|:----------|:------------------|:------------|:---------|:----------|:--------|:-------------|:----------|:---------------|:-------------|:-----------------------------|:-------|:--------|:---------|:---------------|:-----------------|:---------------|:--------------|:---------|:-----------|:------------|:----------------|
| 0 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | X | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | X | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | | | | | | | | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 |  |  |  |  |  | X | X | | | | | X | | | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | X | | X | X | | X | | | | | | | | X | | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 7 |  |  |  |  |  | X | X | X | | | | X | | | | | | | | | | | | | X | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 9 | 10 |  |  |  |  |  | X | X | | | X | | | | | | | | | | X | | | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 10 | 20 |  |  |  |  |  | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | |
| 11 | 11 |  |  |  |  |  | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | | | | | | | | |
| 12 | 9 |  |  |  |  |  | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/saber_fatestaynightufotable
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T17:51:55+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-28T12:15:42+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Saber (Fate Stay Night [UFOTABLE])
=============================================
This is the dataset of Saber (Fate Stay Night [UFOTABLE]), containing 323 images and their tags.
The core tags of this character are 'blonde\_hair, green\_eyes, ahoge, ribbon, hair\_ribbon, braid, blue\_ribbon, short\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6a524f878113e130e4df6b56ee8bbf25f9f8ccfa
|
# Dataset Card for "writingprompts"
WritingPrompts dataset, as used in [Hierarchical Neural Story Generation](https://arxiv.org/pdf/1805.04833.pdf). Parsed from [the archive](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz)
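
A minimal loading sketch (the `prompt`/`story` fields and the train/test/validation splits follow the dataset_info metadata for this repository):

```python
from datasets import load_dataset

# Load all three splits; each example pairs a writing prompt with a story.
wp = load_dataset("euclaise/writingprompts")
example = wp["train"][0]
print(example["prompt"])
print(example["story"][:300])  # stories can be long, so print a preview
```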
|
euclaise/writingprompts
|
[
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"arxiv:1805.04833",
"region:us"
] |
2023-09-21T17:53:34+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "story", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 858816216, "num_examples": 272600}, {"name": "test", "num_bytes": 47681276, "num_examples": 15138}, {"name": "validation", "num_bytes": 48904993, "num_examples": 15620}], "download_size": 605049830, "dataset_size": 955402485}}
|
2023-09-21T18:12:16+00:00
|
[
"1805.04833"
] |
[
"en"
] |
TAGS
#size_categories-100K<n<1M #language-English #license-mit #arxiv-1805.04833 #region-us
|
# Dataset Card for "writingprompts"
WritingPrompts dataset, as used in Hierarchical Neural Story Generation. Parsed from the archive
|
[
"# Dataset Card for \"writingprompts\"\n\nWritingPrompts dataset, as used in Hierarchical Neural Story Generation. Parsed from the archive"
] |
[
"TAGS\n#size_categories-100K<n<1M #language-English #license-mit #arxiv-1805.04833 #region-us \n",
"# Dataset Card for \"writingprompts\"\n\nWritingPrompts dataset, as used in Hierarchical Neural Story Generation. Parsed from the archive"
] |
[
36,
35
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #language-English #license-mit #arxiv-1805.04833 #region-us \n# Dataset Card for \"writingprompts\"\n\nWritingPrompts dataset, as used in Hierarchical Neural Story Generation. Parsed from the archive"
] |
398ce053096475671e8d3872b8e5b28b59f8fcec
|
# Glaive-code-assistant
Glaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform.
The data is intended to be used to make models act as code assistants, so it is structured in a QA format where the questions are worded similarly to how real users ask code-related questions.
The data has ~60% python samples.
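A minimal loading sketch (the split and column names are assumptions, since they are not documented on this card; inspect them after loading):

```python
from datasets import load_dataset

# Minimal sketch: the "train" split name is an assumption for this repo.
ds = load_dataset("glaiveai/glaive-code-assistant", split="train")
print(ds.column_names)  # check the actual question/answer field names
print(ds[0])            # one QA-style code problem and its solution
```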
To report any problems or suggestions in the data, join the [Glaive discord](https://discord.gg/fjQ4uf3yWD)
|
glaiveai/glaive-code-assistant
|
[
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] |
2023-09-21T17:56:47+00:00
|
{"license": "apache-2.0", "size_categories": ["100K<n<1M"]}
|
2023-09-27T21:51:02+00:00
|
[] |
[] |
TAGS
#size_categories-100K<n<1M #license-apache-2.0 #region-us
|
# Glaive-code-assistant
Glaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform.
The data is intended to be used to make models act as code assistants, so it is structured in a QA format where the questions are worded similarly to how real users ask code-related questions.
The data has ~60% python samples.
To report any problems or suggestions in the data, join the Glaive discord
|
[
"# Glaive-code-assistant\n\nGlaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform.\n\nThe data is intended to be used to make models act as code assistants, and so the data is structured in a QA format where the questions are worded similar to how real users will ask code related questions.\n\nThe data has ~60% python samples.\n\nTo report any problems or suggestions in the data, join the Glaive discord"
] |
[
"TAGS\n#size_categories-100K<n<1M #license-apache-2.0 #region-us \n",
"# Glaive-code-assistant\n\nGlaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform.\n\nThe data is intended to be used to make models act as code assistants, and so the data is structured in a QA format where the questions are worded similar to how real users will ask code related questions.\n\nThe data has ~60% python samples.\n\nTo report any problems or suggestions in the data, join the Glaive discord"
] |
[
26,
115
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #license-apache-2.0 #region-us \n# Glaive-code-assistant\n\nGlaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform.\n\nThe data is intended to be used to make models act as code assistants, and so the data is structured in a QA format where the questions are worded similar to how real users will ask code related questions.\n\nThe data has ~60% python samples.\n\nTo report any problems or suggestions in the data, join the Glaive discord"
] |
f260b6680ee5eb1b1801e3f0efc3eb6506e124b0
|
# Dataset Card for "OpenFulaSpeechCorpora"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cawoylel/OpenFulaSpeechCorpora
|
[
"region:us"
] |
2023-09-21T18:02:25+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "pulaar", "path": "data/pulaar-*"}, {"split": "liptako", "path": "data/liptako-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}, {"name": "dialect", "dtype": "string"}], "splits": [{"name": "pulaar", "num_bytes": 3398551955.96, "num_examples": 12880}, {"name": "liptako", "num_bytes": 490660761.51, "num_examples": 10397}], "download_size": 3084439394, "dataset_size": 3889212717.4700003}}
|
2023-09-21T18:05:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "OpenFulaSpeechCorpora"
More Information needed
|
[
"# Dataset Card for \"OpenFulaSpeechCorpora\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"OpenFulaSpeechCorpora\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"OpenFulaSpeechCorpora\"\n\nMore Information needed"
] |
fa75286fc1d08512b417fa0ba5c22b1d89d936b6
|
# Dataset Card for "Skip_NoClip_Data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
umm-maybe/Skip_NoClip_Data
|
[
"region:us"
] |
2023-09-21T18:09:08+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "Unnamed: 1", "dtype": "int64"}, {"name": "subreddit", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "selftext", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "linktext", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "comments", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2611178, "num_examples": 5397}, {"name": "test", "num_bytes": 275187, "num_examples": 583}], "download_size": 1810839, "dataset_size": 2886365}}
|
2023-09-21T20:34:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Skip_NoClip_Data"
More Information needed
|
[
"# Dataset Card for \"Skip_NoClip_Data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Skip_NoClip_Data\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Skip_NoClip_Data\"\n\nMore Information needed"
] |
28e4ae599e68f0656b0ca2fd6fbbec3c96b7eda1
|
# Dataset of Matou Sakura (Fate Stay Night [UFOTABLE])
This is the dataset of Matou Sakura (Fate Stay Night [UFOTABLE]), containing 163 images and their tags.
The core tags of this character are `purple_hair, long_hair, ribbon, hair_ribbon, purple_eyes, red_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 163 | 152.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matou_sakura_fatestaynightufotable/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 163 | 118.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matou_sakura_fatestaynightufotable/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 332 | 224.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matou_sakura_fatestaynightufotable/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 163 | 152.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matou_sakura_fatestaynightufotable/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 332 | 274.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matou_sakura_fatestaynightufotable/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/matou_sakura_fatestaynightufotable',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, solo, anime_coloring, parody, open_mouth |
| 1 | 12 |  |  |  |  |  | 1girl, homurahara_academy_school_uniform, solo |
| 2 | 7 |  |  |  |  |  | 1girl, homurahara_academy_school_uniform, pink_apron, solo |
| 3 | 6 |  |  |  |  |  | 1girl, homurahara_academy_school_uniform, solo, looking_at_viewer |
| 4 | 8 |  |  |  |  |  | 1girl, serafuku, solo, empty_eyes, anime_coloring, short_hair, upper_body |
| 5 | 7 |  |  |  |  |  | 1girl, solo, bowl, food, cardigan, holding_chopsticks, pink_jacket, table |
| 6 | 5 |  |  |  |  |  | 1girl, elbow_gloves, solo, white_dress, white_gloves, open_mouth, bangs, blush, cleavage, collarbone, necklace, puffy_short_sleeves, smile, blurry_background, large_breasts, night, tree |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | anime_coloring | parody | open_mouth | homurahara_academy_school_uniform | pink_apron | looking_at_viewer | serafuku | empty_eyes | short_hair | upper_body | bowl | food | cardigan | holding_chopsticks | pink_jacket | table | elbow_gloves | white_dress | white_gloves | bangs | blush | cleavage | collarbone | necklace | puffy_short_sleeves | smile | blurry_background | large_breasts | night | tree |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:---------|:-------------|:------------------------------------|:-------------|:--------------------|:-----------|:-------------|:-------------|:-------------|:-------|:-------|:-----------|:---------------------|:--------------|:--------|:---------------|:--------------|:---------------|:--------|:--------|:-----------|:-------------|:-----------|:----------------------|:--------|:--------------------|:----------------|:--------|:-------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | X | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/matou_sakura_fatestaynightufotable
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T18:10:40+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-28T12:48:22+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Matou Sakura (Fate Stay Night [UFOTABLE])
====================================================
This is the dataset of Matou Sakura (Fate Stay Night [UFOTABLE]), containing 163 images and their tags.
The core tags of this character are 'purple\_hair, long\_hair, ribbon, hair\_ribbon, purple\_eyes, red\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d58773f554758b341232db6b836fb2c8190860fd
|
# Dataset of Illyasviel Von Einzbern (Fate Stay Night [UFOTABLE])
This is the dataset of Illyasviel Von Einzbern (Fate Stay Night [UFOTABLE]), containing 95 images and their tags.
The core tags of this character are `long_hair, white_hair, red_eyes, bangs, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 95 | 88.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/illyasviel_von_einzbern_fatestaynightufotable/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 95 | 69.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/illyasviel_von_einzbern_fatestaynightufotable/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 220 | 149.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/illyasviel_von_einzbern_fatestaynightufotable/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 95 | 88.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/illyasviel_von_einzbern_fatestaynightufotable/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 220 | 178.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/illyasviel_von_einzbern_fatestaynightufotable/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/illyasviel_von_einzbern_fatestaynightufotable',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, papakha, purple_headwear, solo, white_scarf, closed_mouth, purple_capelet, smile, upper_body, looking_at_viewer, purple_coat, blurry_background, outdoors |
| 1 | 27 |  |  |  |  |  | 1girl, solo, parody, upper_body, ascot, anime_coloring, purple_shirt |
| 2 | 10 |  |  |  |  |  | 1girl, solo, white_skirt, ascot, long_sleeves, pleated_skirt, purple_shirt |
| 3 | 6 |  |  |  |  |  | 1girl, solo, blood_on_face |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | papakha | purple_headwear | solo | white_scarf | closed_mouth | purple_capelet | smile | upper_body | looking_at_viewer | purple_coat | blurry_background | outdoors | parody | ascot | anime_coloring | purple_shirt | white_skirt | long_sleeves | pleated_skirt | blood_on_face |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:------------------|:-------|:--------------|:---------------|:-----------------|:--------|:-------------|:--------------------|:--------------|:--------------------|:-----------|:---------|:--------|:-----------------|:---------------|:--------------|:---------------|:----------------|:----------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 1 | 27 |  |  |  |  |  | X | | | X | | | | | X | | | | | X | X | X | X | | | | |
| 2 | 10 |  |  |  |  |  | X | | | X | | | | | | | | | | | X | | X | X | X | X | |
| 3 | 6 |  |  |  |  |  | X | | | X | | | | | | | | | | | | | | | | | X |
|
CyberHarem/illyasviel_von_einzbern_fatestaynightufotable
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T18:25:15+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-28T12:28:46+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Illyasviel Von Einzbern (Fate Stay Night [UFOTABLE])
===============================================================
This is the dataset of Illyasviel Von Einzbern (Fate Stay Night [UFOTABLE]), containing 95 images and their tags.
The core tags of this character are 'long\_hair, white\_hair, red\_eyes, bangs, hair\_between\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
9a931026082c4bbe6780486e3dfeadd554a16811
|
# Bangumi Image Base of Akiba Meido Sensou
This is the image base of bangumi Akiba Meido Sensou, we detected 48 characters, 2198 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
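
A minimal sketch for fetching a single character pack for manual cleaning (the `0/dataset.zip` filename mirrors the per-character download links in the table below; adjust the index as needed):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Fetch one character pack; the filename mirrors the table's relative links.
zip_file = hf_hub_download(
    repo_id='BangumiBase/akibameidosensou',
    repo_type='dataset',
    filename='0/dataset.zip',
)
out_dir = 'akibameidosensou_char_0'
os.makedirs(out_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(out_dir)
```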
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 87 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 185 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 39 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 70 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 169 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 314 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 29 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 16 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 28 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 24 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 37 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 21 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 60 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 31 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 35 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 158 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 16 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 13 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 9 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 12 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 33 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 85 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 34 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 9 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 10 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 8 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 10 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 23 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 21 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 38 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 6 | [Download](30/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 31 | 7 | [Download](31/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 32 | 9 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 16 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 11 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 15 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 7 | [Download](36/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 37 | 28 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 136 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 14 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 9 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 9 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 22 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 10 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 11 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 5 | [Download](45/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 46 | 7 | [Download](46/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 252 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/akibameidosensou
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-21T18:30:55+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T09:03:29+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Akiba Meido Sensou
========================================
This is the image base of the bangumi Akiba Meido Sensou. We detected 48 characters and 2198 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise. If you intend to manually train models on this dataset, we recommend performing the necessary preprocessing on the downloaded files to eliminate potentially noisy samples (roughly a 1% chance per image).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
aefd556de6e3c4ff3ea89f55aec31a8f90912672
|
# Dataset Card for "joke_explaination"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
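Until the card is filled in, here is a minimal loading sketch; the repository id and the `url` / `joke` / `explaination` feature names are taken from this repo's dataset_info, and everything else is illustrative:

```python
from datasets import load_dataset

# The repo declares a single "train" split with url, joke, and explaination columns.
ds = load_dataset("dim/joke_explaination", split="train")

example = ds[0]
print(example["joke"])
print(example["explaination"])
print(example["url"])
```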
|
dim/joke_explaination
|
[
"region:us"
] |
2023-09-21T18:41:17+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "joke", "dtype": "string"}, {"name": "explaination", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 262894, "num_examples": 377}], "download_size": 143161, "dataset_size": 262894}}
|
2023-09-21T18:41:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "joke_explaination"
More Information needed
|
[
"# Dataset Card for \"joke_explaination\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"joke_explaination\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"joke_explaination\"\n\nMore Information needed"
] |
d3806a8f7f856440aaf0978b47b13148905965bd
|
# Dataset Card for "joke_explaination_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/joke_explaination_prompts
|
[
"region:us"
] |
2023-09-21T18:42:38+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "explaination", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 194768, "num_examples": 364}], "download_size": 110662, "dataset_size": 194768}}
|
2023-09-21T18:42:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "joke_explaination_prompts"
More Information needed
|
[
"# Dataset Card for \"joke_explaination_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"joke_explaination_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"joke_explaination_prompts\"\n\nMore Information needed"
] |
8682c3f1a968bb2e810dbbb4619e9e48c6152cf5
|
# Dataset Card for Evaluation run of lgaalves/gpt-2-xl_camel-ai-physics
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lgaalves/gpt-2-xl_camel-ai-physics
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [lgaalves/gpt-2-xl_camel-ai-physics](https://huggingface.co/lgaalves/gpt-2-xl_camel-ai-physics) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lgaalves__gpt-2-xl_camel-ai-physics",
"harness_winogrande_5",
split="train")
```
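The aggregated "results" configuration mentioned above can be loaded the same way. A small sketch (the "latest" split follows the naming convention described earlier):

```python
from datasets import load_dataset

# "results" aggregates every run; the "latest" split points at the newest one.
results = load_dataset(
    "open-llm-leaderboard/details_lgaalves__gpt-2-xl_camel-ai-physics",
    "results",
    split="latest",
)
```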
## Latest results
These are the [latest results from run 2023-10-25T20:38:31.656182](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt-2-xl_camel-ai-physics/blob/main/results_2023-10-25T20-38-31.656182.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002202181208053691,
"em_stderr": 0.0004800510816619256,
"f1": 0.05571623322147659,
"f1_stderr": 0.001366603872793856,
"acc": 0.28844560078459863,
"acc_stderr": 0.007481836249406744
},
"harness|drop|3": {
"em": 0.002202181208053691,
"em_stderr": 0.0004800510816619256,
"f1": 0.05571623322147659,
"f1_stderr": 0.001366603872793856
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.001071779348549263
},
"harness|winogrande|5": {
"acc": 0.5753749013417522,
"acc_stderr": 0.013891893150264225
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_lgaalves__gpt-2-xl_camel-ai-physics
|
[
"region:us"
] |
2023-09-21T18:46:25+00:00
|
{"pretty_name": "Evaluation run of lgaalves/gpt-2-xl_camel-ai-physics", "dataset_summary": "Dataset automatically created during the evaluation run of model [lgaalves/gpt-2-xl_camel-ai-physics](https://huggingface.co/lgaalves/gpt-2-xl_camel-ai-physics) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lgaalves__gpt-2-xl_camel-ai-physics\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T20:38:31.656182](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt-2-xl_camel-ai-physics/blob/main/results_2023-10-25T20-38-31.656182.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002202181208053691,\n \"em_stderr\": 0.0004800510816619256,\n \"f1\": 0.05571623322147659,\n \"f1_stderr\": 0.001366603872793856,\n \"acc\": 0.28844560078459863,\n \"acc_stderr\": 0.007481836249406744\n },\n \"harness|drop|3\": {\n \"em\": 0.002202181208053691,\n \"em_stderr\": 0.0004800510816619256,\n \"f1\": 0.05571623322147659,\n \"f1_stderr\": 0.001366603872793856\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \"acc_stderr\": 0.001071779348549263\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5753749013417522,\n \"acc_stderr\": 0.013891893150264225\n }\n}\n```", "repo_url": "https://huggingface.co/lgaalves/gpt-2-xl_camel-ai-physics", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|arc:challenge|25_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T20_38_31.656182", "path": ["**/details_harness|drop|3_2023-10-25T20-38-31.656182.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T20-38-31.656182.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T20_38_31.656182", "path": ["**/details_harness|gsm8k|5_2023-10-25T20-38-31.656182.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T20-38-31.656182.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hellaswag|10_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-21T19-46-11.375703.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T19-46-11.375703.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T19-46-11.375703.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T19-46-11.375703.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T19-46-11.375703.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T19-46-11.375703.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T20_38_31.656182", "path": ["**/details_harness|winogrande|5_2023-10-25T20-38-31.656182.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T20-38-31.656182.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_21T19_46_11.375703", "path": ["results_2023-09-21T19-46-11.375703.parquet"]}, {"split": "2023_10_25T20_38_31.656182", "path": ["results_2023-10-25T20-38-31.656182.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T20-38-31.656182.parquet"]}]}]}
|
2023-10-25T19:38:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of lgaalves/gpt-2-xl_camel-ai-physics
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model lgaalves/gpt-2-xl_camel-ai-physics on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-25T20:38:31.656182 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of lgaalves/gpt-2-xl_camel-ai-physics",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt-2-xl_camel-ai-physics on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T20:38:31.656182 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of lgaalves/gpt-2-xl_camel-ai-physics",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt-2-xl_camel-ai-physics on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T20:38:31.656182 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of lgaalves/gpt-2-xl_camel-ai-physics## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model lgaalves/gpt-2-xl_camel-ai-physics on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T20:38:31.656182 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d02b766ac1d892facc78e56a23dab7261f8cd335
|
# Dataset Card for "04554133"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/04554133
|
[
"region:us"
] |
2023-09-21T18:46:51+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 167, "num_examples": 10}], "download_size": 1328, "dataset_size": 167}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T18:46:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "04554133"
More Information needed
|
[
"# Dataset Card for \"04554133\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"04554133\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"04554133\"\n\nMore Information needed"
] |
9a0f394aed70e24d9a8128cb657430ca030412a5
|
# Dataset Card for Evaluation run of speechlessai/speechless-llama2-dolphin-orca-platypus-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/speechlessai/speechless-llama2-dolphin-orca-platypus-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [speechlessai/speechless-llama2-dolphin-orca-platypus-13b](https://huggingface.co/speechlessai/speechless-llama2-dolphin-orca-platypus-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_speechlessai__speechless-llama2-dolphin-orca-platypus-13b_public",
"harness_winogrande_5",
split="train")
```
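To check which configurations are available before loading anything, the generic `datasets` helper can be used. A small sketch (the names in the comment come from this repo's configuration list):

```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names(
    "open-llm-leaderboard/details_speechlessai__speechless-llama2-dolphin-orca-platypus-13b_public"
)
# Expected to include harness_drop_3, harness_gsm8k_5, harness_winogrande_5 and results.
print(configs)
```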
## Latest results
These are the [latest results from run 2023-11-07T04:32:16.848860](https://huggingface.co/datasets/open-llm-leaderboard/details_speechlessai__speechless-llama2-dolphin-orca-platypus-13b_public/blob/main/results_2023-11-07T04-32-16.848860.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3225671140939597,
"em_stderr": 0.004787213906850376,
"f1": 0.3723804530201348,
"f1_stderr": 0.004674058489948766,
"acc": 0.4344726727873176,
"acc_stderr": 0.00997339204610916
},
"harness|drop|3": {
"em": 0.3225671140939597,
"em_stderr": 0.004787213906850376,
"f1": 0.3723804530201348,
"f1_stderr": 0.004674058489948766
},
"harness|gsm8k|5": {
"acc": 0.09704321455648218,
"acc_stderr": 0.008153768274554725
},
"harness|winogrande|5": {
"acc": 0.7719021310181531,
"acc_stderr": 0.011793015817663595
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_speechlessai__speechless-llama2-dolphin-orca-platypus-13b
|
[
"region:us"
] |
2023-09-21T18:48:06+00:00
|
{"pretty_name": "Evaluation run of speechlessai/speechless-llama2-dolphin-orca-platypus-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [speechlessai/speechless-llama2-dolphin-orca-platypus-13b](https://huggingface.co/speechlessai/speechless-llama2-dolphin-orca-platypus-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_speechlessai__speechless-llama2-dolphin-orca-platypus-13b_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T04:32:16.848860](https://huggingface.co/datasets/open-llm-leaderboard/details_speechlessai__speechless-llama2-dolphin-orca-platypus-13b_public/blob/main/results_2023-11-07T04-32-16.848860.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3225671140939597,\n \"em_stderr\": 0.004787213906850376,\n \"f1\": 0.3723804530201348,\n \"f1_stderr\": 0.004674058489948766,\n \"acc\": 0.4344726727873176,\n \"acc_stderr\": 0.00997339204610916\n },\n \"harness|drop|3\": {\n \"em\": 0.3225671140939597,\n \"em_stderr\": 0.004787213906850376,\n \"f1\": 0.3723804530201348,\n \"f1_stderr\": 0.004674058489948766\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09704321455648218,\n \"acc_stderr\": 0.008153768274554725\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.011793015817663595\n }\n}\n```", "repo_url": "https://huggingface.co/speechlessai/speechless-llama2-dolphin-orca-platypus-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_04T23_49_39.978628", "path": ["**/details_harness|drop|3_2023-11-04T23-49-39.978628.parquet"]}, {"split": "2023_11_07T04_32_16.848860", "path": ["**/details_harness|drop|3_2023-11-07T04-32-16.848860.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T04-32-16.848860.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_04T23_49_39.978628", "path": ["**/details_harness|gsm8k|5_2023-11-04T23-49-39.978628.parquet"]}, {"split": "2023_11_07T04_32_16.848860", "path": ["**/details_harness|gsm8k|5_2023-11-07T04-32-16.848860.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T04-32-16.848860.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_04T23_49_39.978628", "path": ["**/details_harness|winogrande|5_2023-11-04T23-49-39.978628.parquet"]}, {"split": 
"2023_11_07T04_32_16.848860", "path": ["**/details_harness|winogrande|5_2023-11-07T04-32-16.848860.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-07T04-32-16.848860.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_04T23_49_39.978628", "path": ["results_2023-11-04T23-49-39.978628.parquet"]}, {"split": "2023_11_07T04_32_16.848860", "path": ["results_2023-11-07T04-32-16.848860.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T04-32-16.848860.parquet"]}]}]}
|
2023-12-01T14:09:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of speechlessai/speechless-llama2-dolphin-orca-platypus-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model speechlessai/speechless-llama2-dolphin-orca-platypus-13b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
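A minimal sketch of that call, with the repository id taken from this card's metadata and one configuration name ("harness_winogrande_5") from its configuration list:
```python
from datasets import load_dataset

# Per-sample details for one evaluated task; per the card text above,
# the "train" split always points to the latest results.
data = load_dataset(
    "open-llm-leaderboard/details_speechlessai__speechless-llama2-dolphin-orca-platypus-13b_public",
    "harness_winogrande_5",
    split="train",
)
```
Any of the other configurations listed in the metadata (e.g. "harness_drop_3" or "harness_gsm8k_5") can be loaded the same way.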
## Latest results
These are the latest results from run 2023-11-07T04:32:16.848860 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of speechlessai/speechless-llama2-dolphin-orca-platypus-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-llama2-dolphin-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T04:32:16.848860(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of speechlessai/speechless-llama2-dolphin-orca-platypus-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-llama2-dolphin-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T04:32:16.848860(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
181,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of speechlessai/speechless-llama2-dolphin-orca-platypus-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model speechlessai/speechless-llama2-dolphin-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T04:32:16.848860(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
eff00bae59fb4938b06c22720e4d6371ff666708
|
# Dataset Card for "rw_2308_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
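Since the card is otherwise a stub, here is a hedged loading sketch; the "test" split name and the field names are taken from the dataset metadata below, everything else is standard `datasets` usage:
```python
from datasets import load_dataset

# The repository metadata lists a single "test" split with 1000 examples of
# related-work sections together with the abstracts of the cited papers.
rw = load_dataset("shubhamagarwal92/rw_2308_filtered", split="test")
print(rw[0]["title"])         # inspect one paper's title
print(rw[0]["related_work"])  # and its related-work section
```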
|
shubhamagarwal92/rw_2308_filtered
|
[
"region:us"
] |
2023-09-21T18:54:01+00:00
|
{"dataset_info": {"features": [{"name": "aid", "dtype": "string"}, {"name": "mid", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "corpusid", "dtype": "int64"}, {"name": "text_except_rw", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "related_work", "dtype": "string"}, {"name": "original_related_work", "dtype": "string"}, {"name": "ref_abstract", "struct": [{"name": "abstract", "sequence": "string"}, {"name": "cite_N", "sequence": "string"}, {"name": "corpursid", "sequence": "string"}]}, {"name": "ref_abstract_original", "struct": [{"name": "abstract", "sequence": "string"}, {"name": "cite_N", "sequence": "string"}, {"name": "corpursid", "sequence": "string"}]}, {"name": "ref_abstract_full_text", "struct": [{"name": "abstract", "sequence": "string"}, {"name": "all_para_text", "sequence": "string"}, {"name": "cite_N", "sequence": "string"}, {"name": "corpursid", "sequence": "string"}]}, {"name": "ref_abstract_full_text_original", "struct": [{"name": "abstract", "sequence": "string"}, {"name": "all_para_text", "sequence": "string"}, {"name": "cite_N", "sequence": "string"}, {"name": "corpursid", "sequence": "string"}]}, {"name": "total_cites", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 254996014, "num_examples": 1000}], "download_size": 106899160, "dataset_size": 254996014}}
|
2023-09-21T19:48:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rw_2308_filtered"
More Information needed
|
[
"# Dataset Card for \"rw_2308_filtered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rw_2308_filtered\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rw_2308_filtered\"\n\nMore Information needed"
] |
7fa4727046b0c6b651dcce80c3a440fb6abc428f
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Meta-Review dataset is based on the ORSUM dataset proposed in the paper "Meta-review Generation with Checklist-guided Iterative Introspection" by Zeng et al. It was downloaded from their official GitHub repo: https://github.com/Mankeerat/orsum-meta-review-generation
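As a hedged illustration of how the dataset can be consumed (the split names are assumptions, since the card does not document them), it can be loaded directly from the Hub:
```python
from datasets import load_dataset

# Load the Meta-Review dataset; no split is requested because the card does
# not document the split names, so we inspect whatever is available.
dataset = load_dataset("zqz979/meta-review")
print(dataset)  # shows the available splits and their features
```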
### Supported Tasks and Leaderboards
Multi-Document Summarization
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
zqz979/meta-review
|
[
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] |
2023-09-21T18:57:48+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["summarization"]}
|
2023-10-15T01:52:16+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-summarization #size_categories-10K<n<100K #language-English #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
The Meta-Review dataset is based on the ORSUM dataset proposed in the paper "Meta-review Generation with Checklist-guided Iterative Introspection" by Zeng et al. It was downloaded from their official GitHub Repo: URL
### Supported Tasks and Leaderboards
Multi-Document Summarization
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe Meta-Review dataset is a dataset created based on the ORSUM dataset proposed in the paper \"Meta-review Generation with Checklist-guided Iterative Introspection\" by Zeng et al. Downloaded from their official GitHub Repo: URL",
"### Supported Tasks and Leaderboards\n\nMulti-Document Summarization",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-summarization #size_categories-10K<n<100K #language-English #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe Meta-Review dataset is a dataset created based on the ORSUM dataset proposed in the paper \"Meta-review Generation with Checklist-guided Iterative Introspection\" by Zeng et al. Downloaded from their official GitHub Repo: URL",
"### Supported Tasks and Leaderboards\n\nMulti-Document Summarization",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
32,
8,
24,
68,
16,
5,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-summarization #size_categories-10K<n<100K #language-English #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThe Meta-Review dataset is a dataset created based on the ORSUM dataset proposed in the paper \"Meta-review Generation with Checklist-guided Iterative Introspection\" by Zeng et al. Downloaded from their official GitHub Repo: URL### Supported Tasks and Leaderboards\n\nMulti-Document Summarization### Languages\n\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
98a89af3ecaaa67f12c288cc2476864103d7950b
|
# Dataset Card for Evaluation run of marcchew/Marcoroni-7B-LaMini-80K
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/marcchew/Marcoroni-7B-LaMini-80K
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [marcchew/Marcoroni-7B-LaMini-80K](https://huggingface.co/marcchew/Marcoroni-7B-LaMini-80K) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_marcchew__Marcoroni-7B-LaMini-80K",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:17:19.895055](https://huggingface.co/datasets/open-llm-leaderboard/details_marcchew__Marcoroni-7B-LaMini-80K/blob/main/results_2023-12-03T19-17-19.895055.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_marcchew__Marcoroni-7B-LaMini-80K
|
[
"region:us"
] |
2023-09-21T19:12:37+00:00
|
{"pretty_name": "Evaluation run of marcchew/Marcoroni-7B-LaMini-80K", "dataset_summary": "Dataset automatically created during the evaluation run of model [marcchew/Marcoroni-7B-LaMini-80K](https://huggingface.co/marcchew/Marcoroni-7B-LaMini-80K) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_marcchew__Marcoroni-7B-LaMini-80K\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:17:19.895055](https://huggingface.co/datasets/open-llm-leaderboard/details_marcchew__Marcoroni-7B-LaMini-80K/blob/main/results_2023-12-03T19-17-19.895055.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/marcchew/Marcoroni-7B-LaMini-80K", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|arc:challenge|25_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T08_02_52.884764", "path": ["**/details_harness|drop|3_2023-10-24T08-02-52.884764.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T08-02-52.884764.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T08_02_52.884764", "path": ["**/details_harness|gsm8k|5_2023-10-24T08-02-52.884764.parquet"]}, {"split": "2023_12_03T19_17_19.895055", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-17-19.895055.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-17-19.895055.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hellaswag|10_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-12-12.451376.parquet", 
"**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-12-12.451376.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-12-12.451376.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T20-12-12.451376.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": 
["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-12-12.451376.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": 
["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T20-12-12.451376.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T20-12-12.451376.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T08_02_52.884764", "path": ["**/details_harness|winogrande|5_2023-10-24T08-02-52.884764.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T08-02-52.884764.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_21T20_12_12.451376", "path": ["results_2023-09-21T20-12-12.451376.parquet"]}, {"split": "2023_10_24T08_02_52.884764", "path": ["results_2023-10-24T08-02-52.884764.parquet"]}, {"split": "2023_12_03T19_17_19.895055", "path": ["results_2023-12-03T19-17-19.895055.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-17-19.895055.parquet"]}]}]}
|
2023-12-03T19:17:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of marcchew/Marcoroni-7B-LaMini-80K
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model marcchew/Marcoroni-7B-LaMini-80K on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
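A minimal sketch of that call, using the repository id and the "harness_gsm8k_5" configuration from this card's metadata:
```python
from datasets import load_dataset

# Per-sample details for the GSM8K task; per the card text above,
# the "train" split always points to the latest results.
data = load_dataset(
    "open-llm-leaderboard/details_marcchew__Marcoroni-7B-LaMini-80K",
    "harness_gsm8k_5",
    split="train",
)
```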
## Latest results
These are the latest results from run 2023-12-03T19:17:19.895055 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of marcchew/Marcoroni-7B-LaMini-80K",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model marcchew/Marcoroni-7B-LaMini-80K on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:17:19.895055(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of marcchew/Marcoroni-7B-LaMini-80K",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model marcchew/Marcoroni-7B-LaMini-80K on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:17:19.895055(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of marcchew/Marcoroni-7B-LaMini-80K## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model marcchew/Marcoroni-7B-LaMini-80K on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:17:19.895055(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
60d4bff598f9b729823bdcef9d6ff1b5cbd533fe
|
# Dataset Card for "DirtyWritingPrompts"
Data collected from r/DirtyWritingPrompts, up to 12-2022, from PushShift.
|
euclaise/DirtyWritingPrompts
|
[
"license:cc0-1.0",
"not-for-all-audiences",
"region:us"
] |
2023-09-21T19:12:48+00:00
|
{"license": "cc0-1.0", "dataset_info": {"features": [{"name": "post_title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "gilded", "dtype": "int64"}, {"name": "post_score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 36315869, "num_examples": 27921}], "download_size": 18528856, "dataset_size": 36315869}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["not-for-all-audiences"]}
|
2023-09-22T13:39:33+00:00
|
[] |
[] |
TAGS
#license-cc0-1.0 #not-for-all-audiences #region-us
|
# Dataset Card for "DirtyWritingPrompts"
Data collected from r/DirtyWritingPrompts, up to 12-2022, from PushShift.
|
[
"# Dataset Card for \"DirtyWritingPrompts\"\n\nData collected from r/DirtyWritingPrompts, up to 12-2022, from PushShift."
] |
[
"TAGS\n#license-cc0-1.0 #not-for-all-audiences #region-us \n",
"# Dataset Card for \"DirtyWritingPrompts\"\n\nData collected from r/DirtyWritingPrompts, up to 12-2022, from PushShift."
] |
[
23,
42
] |
[
"passage: TAGS\n#license-cc0-1.0 #not-for-all-audiences #region-us \n# Dataset Card for \"DirtyWritingPrompts\"\n\nData collected from r/DirtyWritingPrompts, up to 12-2022, from PushShift."
] |
e57bfd9ce9296776c8c2fb02dfe47908149a1072
|
# Dataset Card for "PIPPA-lmgym"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/PIPPA-lmgym
|
[
"region:us"
] |
2023-09-21T19:13:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32569932093, "num_examples": 398603}], "download_size": 443538444, "dataset_size": 32569932093}}
|
2023-09-21T21:06:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "PIPPA-lmgym"
More Information needed
|
[
"# Dataset Card for \"PIPPA-lmgym\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"PIPPA-lmgym\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"PIPPA-lmgym\"\n\nMore Information needed"
] |
217cbac5d8e72703e0aa1f3a807c18cf49a7faf6
|
# Dataset Card for "formal-logic-simple-order-simple-objects-blivergent-500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pccl-org/formal-logic-simple-order-simple-objects-blivergent-500
|
[
"region:us"
] |
2023-09-21T19:14:22+00:00
|
{"dataset_info": {"features": [{"name": "greater_than", "dtype": "string"}, {"name": "less_than", "dtype": "string"}, {"name": "correct_example", "sequence": "string"}, {"name": "incorrect_example", "sequence": "string"}, {"name": "distance", "dtype": "int64"}, {"name": "index", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 19635650, "num_examples": 124750}], "download_size": 3888871, "dataset_size": 19635650}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T19:20:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "formal-logic-simple-order-simple-objects-blivergent-500"
More Information needed
|
[
"# Dataset Card for \"formal-logic-simple-order-simple-objects-blivergent-500\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"formal-logic-simple-order-simple-objects-blivergent-500\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"formal-logic-simple-order-simple-objects-blivergent-500\"\n\nMore Information needed"
] |
360c29fd5af34fc814f935f32d710af804c03a7e
|
# Dataset Card for "formal-logic-simple-order-new-objects-bigger-500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pccl-org/formal-logic-simple-order-new-objects-bigger-500
|
[
"region:us"
] |
2023-09-21T19:14:43+00:00
|
{"dataset_info": {"features": [{"name": "greater_than", "dtype": "string"}, {"name": "less_than", "dtype": "string"}, {"name": "correct_example", "sequence": "string"}, {"name": "incorrect_example", "sequence": "string"}, {"name": "distance", "dtype": "int64"}, {"name": "index", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 17349731, "num_examples": 124750}], "download_size": 0, "dataset_size": 17349731}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T18:22:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "formal-logic-simple-order-new-objects-bigger-500"
More Information needed
|
[
"# Dataset Card for \"formal-logic-simple-order-new-objects-bigger-500\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"formal-logic-simple-order-new-objects-bigger-500\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"formal-logic-simple-order-new-objects-bigger-500\"\n\nMore Information needed"
] |
8cbf8ee6cac7a34425256235a6970ee7427e31ee
|
# Dataset Card for "oa_stackexchange_200k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/oa_stackexchange_200k
|
[
"region:us"
] |
2023-09-21T19:19:45+00:00
|
{"dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "struct": [{"name": "answer_score", "dtype": "int64"}, {"name": "question_score", "dtype": "int64"}, {"name": "tags", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 206910529.02007446, "num_examples": 200000}], "download_size": 123745965, "dataset_size": 206910529.02007446}}
|
2023-09-21T19:20:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oa_stackexchange_200k"
More Information needed
|
[
"# Dataset Card for \"oa_stackexchange_200k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oa_stackexchange_200k\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oa_stackexchange_200k\"\n\nMore Information needed"
] |
78966e416b462a7bd8737cc34309261f86c63bf6
|
# Dataset Card for "textbook_quality"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vikp/textbook_quality
|
[
"region:us"
] |
2023-09-21T19:20:18+00:00
|
{"dataset_info": {"features": [{"name": "topic", "dtype": "string"}, {"name": "outline", "sequence": "string"}, {"name": "concepts", "sequence": "string"}, {"name": "markdown", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1813817, "num_examples": 64}], "download_size": 719704, "dataset_size": 1813817}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T02:03:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "textbook_quality"
More Information needed
|
[
"# Dataset Card for \"textbook_quality\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"textbook_quality\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"textbook_quality\"\n\nMore Information needed"
] |
faab1f66945670eb079b539d089cbb2b0aaf0649
|
# Dataset Card for "scale_helpful_no_math_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/scale_helpful_no_math
|
[
"region:us"
] |
2023-09-21T19:33:05+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_rm", "num_bytes": 103718424, "num_examples": 17095}, {"name": "train", "num_bytes": 103718424, "num_examples": 17095}], "download_size": 116368522, "dataset_size": 207436848}}
|
2023-09-25T16:19:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "scale_helpful_no_math_raw"
More Information needed
|
[
"# Dataset Card for \"scale_helpful_no_math_raw\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"scale_helpful_no_math_raw\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"scale_helpful_no_math_raw\"\n\nMore Information needed"
] |
95c2b11d8b8c210d69b505ee93f22fb34bbf825c
|
# Dataset Card for "limerick-topic-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yorth/limerick-topic-train
|
[
"region:us"
] |
2023-09-21T19:34:14+00:00
|
{"dataset_info": {"features": [{"name": "combined", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16886056, "num_examples": 52708}], "download_size": 8170276, "dataset_size": 16886056}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-21T19:34:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "limerick-topic-train"
More Information needed
|
[
"# Dataset Card for \"limerick-topic-train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"limerick-topic-train\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"limerick-topic-train\"\n\nMore Information needed"
] |
4d7f9ed46eaf119fdf2597f8ad9d5dd89fc7f033
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
arianhosseini/gsm_preference_v1
|
[
"region:us"
] |
2023-09-21T19:48:24+00:00
|
{"configs": [{"config_name": "balanced", "data_files": [{"split": "train", "path": "preference_data_balanced.jsonl.train"}, {"split": "valid", "path": "preference_data_balanced.jsonl.valid"}]}, {"config_name": "unbalanced", "data_files": [{"split": "train", "path": "preference_data_unbalanced.jsonl.train"}, {"split": "valid", "path": "preference_data_unbalanced.jsonl.valid"}]}]}
|
2023-09-21T23:33:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
15f5dc0ac2bea932423a7dc865a80e64d1b41258
|
# Learning layouts in Path of Exile with Vision Transformers: A proof of concept
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/650c55bc9169ea73315b6c22/RJ-rTPWwOFUZlA3ydqhZ2.mp4"></video>
Where's the exit? This question often crosses the minds of both newcomers and seasoned players alike. The key lies in understanding the game's layouts, especially during the campaign when taking a wrong turn can significantly slow you down. Our project aims to solve this challenge through machine learning.
We've developed a proof-of-concept for learning layouts in Path of Exile using Vision Transformers. We trained a Vision Transformer to predict the direction of the exit in the A3 Marketplace, relying solely on a video of the minimap. You can see the model in action in the video above: the red arrow indicates the predicted exit direction, while the green arrow shows the actual direction.
Project page: https://github.com/kweimann/poe-learning-layouts
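As an illustration of the task only, here is a minimal sketch (assumed architecture and preprocessing, not the code from the linked repository) of a Vision Transformer that regresses the exit direction from a single minimap frame as a unit vector:

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class ExitDirectionViT(nn.Module):
    """Assumed frame-level model: ViT backbone with a 2-dim (cos, sin) regression head."""
    def __init__(self):
        super().__init__()
        self.backbone = vit_b_16(weights=None)    # train from scratch or load your own weights
        self.backbone.heads = nn.Linear(768, 2)   # replace the classifier with a direction head

    def forward(self, frames):                    # frames: (B, 3, 224, 224) minimap crops
        direction = self.backbone(frames)
        return nn.functional.normalize(direction, dim=-1)  # project onto the unit circle

model = ExitDirectionViT()
dummy_frame = torch.randn(1, 3, 224, 224)         # placeholder input, not real game data
cos_sin = model(dummy_frame)
angle = torch.atan2(cos_sin[:, 1], cos_sin[:, 0])  # predicted exit direction in radians
```

The actual model in the repository works on minimap video rather than single frames, so treat this purely as a shape-level sketch.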
|
kweimann/poe-learning-layouts
|
[
"license:mit",
"region:us"
] |
2023-09-21T19:49:13+00:00
|
{"license": "mit"}
|
2023-09-23T11:10:21+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Learning layouts in Path of Exile with Vision Transformers: A proof of concept
<video controls autoplay src="URL
Where's the exit? This question often crosses the minds of both newcomers and seasoned players alike. The key lies in understanding the game's layouts, especially during the campaign when taking a wrong turn can significantly slow you down. Our project aims to solve this challenge through machine learning.
We've developed a proof-of-concept for learning layouts in Path of Exile using Vision Transformers. We trained a Vision Transformer to predict the direction of the exit in the A3 Marketplace, relying solely on a video of the minimap. You can see the model in action in the video above: the red arrow indicates the predicted exit direction, while the green arrow shows the actual direction.
Project page: URL
|
[
"# Learning layouts in Path of Exile with Vision Transformers: A proof of concept\n\n<video controls autoplay src=\"URL\n\nWhere's the exit? This question often crosses the minds of both newcomers and seasoned players alike. The key lies in understanding the game's layouts, especially during the campaign when taking a wrong turn can significantly slow you down. Our project aims to solve this challenge through machine learning.\n\nWe've developed a proof-of-concept for learning layouts in Path of Exile using Vision Transformers. We trained a Vision Transformer to predict the direction of the exit in the A3 Marketplace, relying solely on a video of the minimap. You can see the model in action in the video above: the red arrow indicates the predicted exit direction, while the green arrow shows the actual direction.\n\nProject page: URL"
] |
[
"TAGS\n#license-mit #region-us \n",
"# Learning layouts in Path of Exile with Vision Transformers: A proof of concept\n\n<video controls autoplay src=\"URL\n\nWhere's the exit? This question often crosses the minds of both newcomers and seasoned players alike. The key lies in understanding the game's layouts, especially during the campaign when taking a wrong turn can significantly slow you down. Our project aims to solve this challenge through machine learning.\n\nWe've developed a proof-of-concept for learning layouts in Path of Exile using Vision Transformers. We trained a Vision Transformer to predict the direction of the exit in the A3 Marketplace, relying solely on a video of the minimap. You can see the model in action in the video above: the red arrow indicates the predicted exit direction, while the green arrow shows the actual direction.\n\nProject page: URL"
] |
[
11,
194
] |
[
"passage: TAGS\n#license-mit #region-us \n# Learning layouts in Path of Exile with Vision Transformers: A proof of concept\n\n<video controls autoplay src=\"URL\n\nWhere's the exit? This question often crosses the minds of both newcomers and seasoned players alike. The key lies in understanding the game's layouts, especially during the campaign when taking a wrong turn can significantly slow you down. Our project aims to solve this challenge through machine learning.\n\nWe've developed a proof-of-concept for learning layouts in Path of Exile using Vision Transformers. We trained a Vision Transformer to predict the direction of the exit in the A3 Marketplace, relying solely on a video of the minimap. You can see the model in action in the video above: the red arrow indicates the predicted exit direction, while the green arrow shows the actual direction.\n\nProject page: URL"
] |
d6c451ebaefe7e2e7292edbf10470386e12fb93f
|
# Dataset Card for "law_stackexchange"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/law_stackexchange
|
[
"region:us"
] |
2023-09-21T19:56:29+00:00
|
{"dataset_info": {"features": [{"name": "question_id", "dtype": "int64"}, {"name": "tags", "sequence": "string"}, {"name": "score", "dtype": "int64"}, {"name": "license", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "question_title", "dtype": "string"}, {"name": "question_body", "dtype": "string"}, {"name": "answers", "list": [{"name": "answer_id", "dtype": "int64"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 95966652, "num_examples": 24370}], "download_size": 53517367, "dataset_size": 95966652}}
|
2023-09-21T19:56:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "law_stackexchange"
More Information needed
|
[
"# Dataset Card for \"law_stackexchange\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"law_stackexchange\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"law_stackexchange\"\n\nMore Information needed"
] |
e554dc5fe3b60eae64b5c7ca0e82cb2d76a1bc6a
|
# Dataset Card for Evaluation run of Lazycuber/L2-7b-Base-Guanaco-Uncensored
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Lazycuber/L2-7b-Base-Guanaco-Uncensored
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Lazycuber/L2-7b-Base-Guanaco-Uncensored](https://huggingface.co/Lazycuber/L2-7b-Base-Guanaco-Uncensored) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Lazycuber__L2-7b-Base-Guanaco-Uncensored",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T18:03:37.956652](https://huggingface.co/datasets/open-llm-leaderboard/details_Lazycuber__L2-7b-Base-Guanaco-Uncensored/blob/main/results_2023-10-25T18-03-37.956652.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219032,
"f1": 0.05746119966442964,
"f1_stderr": 0.0013225129443672397,
"acc": 0.4089247492629428,
"acc_stderr": 0.009702205865271943
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219032,
"f1": 0.05746119966442964,
"f1_stderr": 0.0013225129443672397
},
"harness|gsm8k|5": {
"acc": 0.07278241091736164,
"acc_stderr": 0.007155604761167465
},
"harness|winogrande|5": {
"acc": 0.745067087608524,
"acc_stderr": 0.012248806969376422
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Lazycuber__L2-7b-Base-Guanaco-Uncensored
|
[
"region:us"
] |
2023-09-21T19:59:01+00:00
|
{"pretty_name": "Evaluation run of Lazycuber/L2-7b-Base-Guanaco-Uncensored", "dataset_summary": "Dataset automatically created during the evaluation run of model [Lazycuber/L2-7b-Base-Guanaco-Uncensored](https://huggingface.co/Lazycuber/L2-7b-Base-Guanaco-Uncensored) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Lazycuber__L2-7b-Base-Guanaco-Uncensored\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T18:03:37.956652](https://huggingface.co/datasets/open-llm-leaderboard/details_Lazycuber__L2-7b-Base-Guanaco-Uncensored/blob/main/results_2023-10-25T18-03-37.956652.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.0003314581465219032,\n \"f1\": 0.05746119966442964,\n \"f1_stderr\": 0.0013225129443672397,\n \"acc\": 0.4089247492629428,\n \"acc_stderr\": 0.009702205865271943\n },\n \"harness|drop|3\": {\n \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.0003314581465219032,\n \"f1\": 0.05746119966442964,\n \"f1_stderr\": 0.0013225129443672397\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07278241091736164,\n \"acc_stderr\": 0.007155604761167465\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.745067087608524,\n \"acc_stderr\": 0.012248806969376422\n }\n}\n```", "repo_url": "https://huggingface.co/Lazycuber/L2-7b-Base-Guanaco-Uncensored", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|arc:challenge|25_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T18_03_37.956652", "path": ["**/details_harness|drop|3_2023-10-25T18-03-37.956652.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T18-03-37.956652.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T18_03_37.956652", "path": ["**/details_harness|gsm8k|5_2023-10-25T18-03-37.956652.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T18-03-37.956652.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hellaswag|10_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-58-37.445412.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-58-37.445412.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-58-37.445412.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-21T20-58-37.445412.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T20-58-37.445412.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-21T20-58-37.445412.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T18_03_37.956652", "path": ["**/details_harness|winogrande|5_2023-10-25T18-03-37.956652.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T18-03-37.956652.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_21T20_58_37.445412", "path": ["results_2023-09-21T20-58-37.445412.parquet"]}, {"split": "2023_10_25T18_03_37.956652", "path": ["results_2023-10-25T18-03-37.956652.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T18-03-37.956652.parquet"]}]}]}
|
2023-10-25T17:03:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Lazycuber/L2-7b-Base-Guanaco-Uncensored
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Lazycuber/L2-7b-Base-Guanaco-Uncensored on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
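For instance, a minimal sketch using the `datasets` library — the repository name below is an assumption based on the leaderboard's usual `details_<org>__<model>` naming convention (it is not stated in this card), and `harness_winogrande_5` is one of the configurations listed in the metadata:

```python
from datasets import load_dataset

# Repository name assumed from the Open LLM Leaderboard naming convention;
# "harness_winogrande_5" is one of the 64 configurations, and the "latest"
# split always points to the most recent evaluation run.
data = load_dataset(
    "open-llm-leaderboard/details_Lazycuber__L2-7b-Base-Guanaco-Uncensored",
    "harness_winogrande_5",
    split="latest",
)
```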
## Latest results
These are the latest results from run 2023-10-25T18:03:37.956652 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Lazycuber/L2-7b-Base-Guanaco-Uncensored",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Lazycuber/L2-7b-Base-Guanaco-Uncensored on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T18:03:37.956652(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Lazycuber/L2-7b-Base-Guanaco-Uncensored",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Lazycuber/L2-7b-Base-Guanaco-Uncensored on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T18:03:37.956652(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
29,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Lazycuber/L2-7b-Base-Guanaco-Uncensored## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Lazycuber/L2-7b-Base-Guanaco-Uncensored on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T18:03:37.956652(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3f296b449506b3017e16289011009e701efb7784
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
orderofmagnitude/coT
|
[
"region:us"
] |
2023-09-21T19:59:04+00:00
|
{}
|
2023-09-21T20:30:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6fd43bed8b8b288b2825937cb7fd3931cf14d802
|
# Dataset Card for "law_stackexchange_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/law_stackexchange_prompts
|
[
"region:us"
] |
2023-09-21T19:59:57+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 64447591, "num_examples": 24343}], "download_size": 38111723, "dataset_size": 64447591}}
|
2023-09-21T20:00:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "law_stackexchange_prompts"
More Information needed
|
[
"# Dataset Card for \"law_stackexchange_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"law_stackexchange_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"law_stackexchange_prompts\"\n\nMore Information needed"
] |
6ba7688130953b828c7a986de1f27ac53018d705
|
# Dataset Card for sales-textbook_for_convincing_and_selling
A textbook created for the purpose of training a sales chatbot.
Inspiration comes from: Textbooks Are All You Need https://arxiv.org/abs/2306.11644
The data was generated by gpt-3.5-turbo.
# Structure
A simple textbook with headlines and subheadlines.
Chapters and subheadlines are mentioned in the dataset; look at the first two examples.
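A quick, hypothetical way to inspect those first two examples with the `datasets` library (the split name `train` is an assumption):

```python
from datasets import load_dataset

# Repo id taken from this dataset card; the split name is an assumption.
ds = load_dataset("goendalf666/sales-textbook_for_convincing_and_selling", split="train")
for row in ds.select(range(2)):
    print(row)
```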
# Data Generation
The following code was used for the text generation:
#include github link
Out of the textbook, conversation examples were generated:
https://huggingface.co/datasets/goendalf666/sales-conversations
Here is the prompt that was used for the data generation.
For the exact data generation code, look up the following repo (a structure with headlines and subheadlines was generated beforehand):
https://github.com/tom813/salesGPT_foundation/blob/main/data_generation/textbook_and_conversation_gen.py
```
prompt = f"""
I want to write a book about sales and convincing techniques. Here is the outline of the chapters:
1. Building Rapport and Capturing Attention
2. Developing Exceptional Communication Skills
3. Discovering Customer Needs and Pain Points
4. Presenting Solutions and Benefits
5. Overcoming Resistance and Objections
6. Closing the Sale
Here is the outline of the current chapter that:
{headline}
Write me a long and detailed text for the subpoint: {subheadline} of the current chapter and only write a text for this subpoint.
Ignore points like body language or tone of voice. Focus on the
Start by mentioning the Chapter and the subpoint.
The overall aim is to write a textbook.
to teach someone with less experience how to convince people and sell stuff.
"""
```
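
As a rough, hypothetical sketch of how a prompt like the one above could be sent to gpt-3.5-turbo — the client setup, temperature, and the subheadline value below are assumptions, not the exact code from the linked repo:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative placeholders; the real script derives these from the generated outline.
headline = "2. Developing Exceptional Communication Skills"
subheadline = "Active listening and asking open-ended questions"  # assumed subpoint

filled_prompt = (
    "I want to write a book about sales and convincing techniques.\n"
    f"Here is the outline of the current chapter:\n{headline}\n"
    f"Write me a long and detailed text for the subpoint: {subheadline} "
    "of the current chapter and only write a text for this subpoint.\n"
    "The overall aim is to write a textbook to teach someone with less "
    "experience how to convince people and sell stuff."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": filled_prompt}],
    temperature=0.7,  # assumed value
)
print(response.choices[0].message.content)
```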
|
goendalf666/sales-textbook_for_convincing_and_selling
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"sales",
"arxiv:2306.11644",
"region:us"
] |
2023-09-21T20:14:53+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "tags": ["sales"]}
|
2023-10-04T19:38:03+00:00
|
[
"2306.11644"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #sales #arxiv-2306.11644 #region-us
|
# Dataset Card for sales-textbook_for_convincing_and_selling
A textbook create for the purpose of training a sales chatbot.
Inspiration come from: Textbooks is all you need URL
The data was generated by gpt-3.5-turbo
#Structure
A simpel textbook that has subheadlines and headlines.
Chapters and Subheadlines are mentioned in the dataset. Look at the first two examples.
# Data Generation
The following code was used for the text generation:
#include github link
Out of the textbook conversation examples were generated
URL
Here is the prompt that was used for the data generation.
For the exact data generation code look up the following repo:
#a structure with headlines and subheadlines was generated before URL
|
[
"# Dataset Card for sales-textbook_for_convincing_and_selling\nA textbook create for the purpose of training a sales chatbot.\n\nInspiration come from: Textbooks is all you need URL\n\nThe data was generated by gpt-3.5-turbo",
"# Data Generation\nThe following code was used for the text generation:"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #sales #arxiv-2306.11644 #region-us \n",
"# Dataset Card for sales-textbook_for_convincing_and_selling\nA textbook create for the purpose of training a sales chatbot.\n\nInspiration come from: Textbooks is all you need URL\n\nThe data was generated by gpt-3.5-turbo",
"# Data Generation\nThe following code was used for the text generation:"
] |
[
44,
58,
13
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #sales #arxiv-2306.11644 #region-us \n# Dataset Card for sales-textbook_for_convincing_and_selling\nA textbook create for the purpose of training a sales chatbot.\n\nInspiration come from: Textbooks is all you need URL\n\nThe data was generated by gpt-3.5-turbo# Data Generation\nThe following code was used for the text generation:"
] |