sha (stringlengths 40-40) | text (stringlengths 1-13.4M) | id (stringlengths 2-117) | tags (listlengths 1-7.91k) | created_at (stringlengths 25-25) | metadata (stringlengths 2-875k) | last_modified (stringlengths 25-25) | arxiv (listlengths 0-25) | languages (listlengths 0-7.91k) | tags_str (stringlengths 17-159k) | text_str (stringlengths 1-447k) | text_lists (listlengths 0-352) | processed_texts (listlengths 1-353) | tokens_length (listlengths 1-353) | input_texts (listlengths 1-40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a63476b7abca5b64df00156cf81a02da2afcd540
|
# Dataset Card for Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k](https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k",
"harness_winogrande_5",
split="train")
```
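The aggregated results live in their own configuration; a minimal sketch of loading its "latest" split (the config and split names below follow the pattern declared in this repository's metadata):
```python
from datasets import load_dataset

# The "results" config holds the aggregated metrics; its "latest" split
# always points to the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k",
    "results",
    split="latest",
)
```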
## Latest results
These are the [latest results from run 2023-09-22T18:49:08.232237](https://huggingface.co/datasets/open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k/blob/main/results_2023-09-22T18-49-08.232237.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.03838087248322147,
"em_stderr": 0.0019674269651511014,
"f1": 0.12162856543624175,
"f1_stderr": 0.0024752721568615517,
"acc": 0.28326869809409316,
"acc_stderr": 0.00781704702542305
},
"harness|drop|3": {
"em": 0.03838087248322147,
"em_stderr": 0.0019674269651511014,
"f1": 0.12162856543624175,
"f1_stderr": 0.0024752721568615517
},
"harness|gsm8k|5": {
"acc": 0.0037907505686125853,
"acc_stderr": 0.0016927007401501821
},
"harness|winogrande|5": {
"acc": 0.5627466456195738,
"acc_stderr": 0.013941393310695918
}
}
```
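If you prefer working with the raw JSON rather than the `datasets` API, a sketch using `huggingface_hub` (the filename matches the results file linked above; inspect the top-level keys first, since the snippet above may be an excerpt of the full file):
```python
import json

from huggingface_hub import hf_hub_download

# Download the raw results file from the dataset repo and parse it locally.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k",
    filename="results_2023-09-22T18-49-08.232237.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)
print(list(data.keys()))  # inspect the top-level structure first
```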
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k
|
[
"region:us"
] |
2023-09-22T17:49:12+00:00
|
{"pretty_name": "Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k", "dataset_summary": "Dataset automatically created during the evaluation run of model [GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k](https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T18:49:08.232237](https://huggingface.co/datasets/open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k/blob/main/results_2023-09-22T18-49-08.232237.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03838087248322147,\n \"em_stderr\": 0.0019674269651511014,\n \"f1\": 0.12162856543624175,\n \"f1_stderr\": 0.0024752721568615517,\n \"acc\": 0.28326869809409316,\n \"acc_stderr\": 0.00781704702542305\n },\n \"harness|drop|3\": {\n \"em\": 0.03838087248322147,\n \"em_stderr\": 0.0019674269651511014,\n \"f1\": 0.12162856543624175,\n \"f1_stderr\": 0.0024752721568615517\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0037907505686125853,\n \"acc_stderr\": 0.0016927007401501821\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5627466456195738,\n \"acc_stderr\": 0.013941393310695918\n }\n}\n```", "repo_url": "https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T18_49_08.232237", "path": ["**/details_harness|drop|3_2023-09-22T18-49-08.232237.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T18-49-08.232237.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T18_49_08.232237", "path": ["**/details_harness|gsm8k|5_2023-09-22T18-49-08.232237.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T18-49-08.232237.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T18_49_08.232237", "path": ["**/details_harness|winogrande|5_2023-09-22T18-49-08.232237.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T18-49-08.232237.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T18_49_08.232237", "path": 
["results_2023-09-22T18-49-08.232237.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T18-49-08.232237.parquet"]}]}]}
|
2023-09-22T17:49:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can, for instance, do the following:
## Latest results
These are the latest results from run 2023-09-22T18:49:08.232237 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T18:49:08.232237(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T18:49:08.232237(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
30,
31,
178,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T18:49:08.232237(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c66d1b8b6752a620bcf72710fc5258609df7889f
|
# Dataset Card for "vedic-sanskrit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shunyasea/vedic-sanskrit
|
[
"region:us"
] |
2023-09-22T17:55:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60638909, "num_examples": 536641}, {"name": "test", "num_bytes": 6759017, "num_examples": 59627}], "download_size": 28757388, "dataset_size": 67397926}}
|
2023-09-22T18:00:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "vedic-sanskrit"
More Information needed
|
[
"# Dataset Card for \"vedic-sanskrit\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"vedic-sanskrit\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"vedic-sanskrit\"\n\nMore Information needed"
] |
f957d6f8c9f246ec0699a2a3bb54443141fb26a2
|
# Dataset Card for "cool_new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
NyxSlee/cool_new_dataset
|
[
"region:us"
] |
2023-09-22T18:02:36+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "price", "dtype": "float64"}, {"name": "color", "dtype": "string"}, {"name": "size", "sequence": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5020, "num_examples": 5}], "download_size": 11617, "dataset_size": 5020}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T18:02:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cool_new_dataset"
More Information needed
|
[
"# Dataset Card for \"cool_new_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cool_new_dataset\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cool_new_dataset\"\n\nMore Information needed"
] |
d16b15fdb36320da159bc3fdbb4c36c48975c1b3
|
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
axelprsvl/my_dataset
|
[
"region:us"
] |
2023-09-22T18:03:14+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 40520175.0, "num_examples": 5}], "download_size": 40474142, "dataset_size": 40520175.0}}
|
2023-09-22T18:03:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "my_dataset"
More Information needed
|
[
"# Dataset Card for \"my_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"my_dataset\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"my_dataset\"\n\nMore Information needed"
] |
5cc3b1d446f56d5a6ce556f5e4c0d0d543a4673c
|
# Sonar
The [Sonar dataset](https://archive-beta.ics.uci.edu/dataset/151/connectionist+bench+sonar+mines+vs+rocks) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
A dataset for discriminating between sonar signals bounced off a metal cylinder and those bounced off a roughly cylindrical rock.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| sonar | Binary classification | Is the sonar detecting a rock? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/sonar")["train"]
```
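For a quick experiment you can carve a held-out set out of the single split; a minimal sketch using the standard `datasets` API (the 80/20 ratio and the seed are arbitrary choices, not part of the dataset):
```python
from datasets import load_dataset

# Load the single split and derive train/test subsets from it.
dataset = load_dataset("mstz/sonar")["train"]
splits = dataset.train_test_split(test_size=0.2, seed=42)
print(len(splits["train"]), len(splits["test"]))
```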
|
asoria/sonar
|
[
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"adult",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-09-22T18:09:50+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Sonar", "configs": [{"config_name": "sonar"}], "tags": ["adult", "tabular_classification", "binary_classification", "UCI"]}
|
2023-09-22T18:14:37+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-tabular-classification #size_categories-n<1K #language-English #license-cc #adult #tabular_classification #binary_classification #UCI #region-us
|
Sonar
=====
The Sonar dataset from the UCI ML repository.
A dataset for discriminating between sonar signals bounced off a metal cylinder and those bounced off a roughly cylindrical rock.
Configurations and tasks
========================
Configuration: sonar, Task: Binary classification, Description: Is the sonar detecting a rock?
Usage
=====
|
[] |
[
"TAGS\n#task_categories-tabular-classification #size_categories-n<1K #language-English #license-cc #adult #tabular_classification #binary_classification #UCI #region-us \n"
] |
[
53
] |
[
"passage: TAGS\n#task_categories-tabular-classification #size_categories-n<1K #language-English #license-cc #adult #tabular_classification #binary_classification #UCI #region-us \n"
] |
b6cce8517ee70f3174526dff5d7bc303bd813eb3
|
# Dataset Card for "llmjp2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmuttharasan/llmjp2
|
[
"region:us"
] |
2023-09-22T18:14:43+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 592043, "num_examples": 1}], "download_size": 0, "dataset_size": 592043}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T18:19:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llmjp2"
More Information needed
|
[
"# Dataset Card for \"llmjp2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llmjp2\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llmjp2\"\n\nMore Information needed"
] |
580c20746ad30057e6f4be8b386167c30b43f5e3
|
# Dataset Card for "llmjptk4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmuttharasan/llmjptk4
|
[
"region:us"
] |
2023-09-22T18:20:00+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 109096.0, "num_examples": 26}, {"name": "test", "num_bytes": 109096.0, "num_examples": 26}], "download_size": 48848, "dataset_size": 218192.0}}
|
2023-09-22T18:20:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llmjptk4"
More Information needed
|
[
"# Dataset Card for \"llmjptk4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llmjptk4\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llmjptk4\"\n\nMore Information needed"
] |
e2e7a840e44509749e65b2b6d9c1e1555094456a
|
# Dataset Card for "vedic-sanskrit-sources"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shunyasea/vedic-sanskrit-sources
|
[
"region:us"
] |
2023-09-22T18:53:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "sequence": "string"}, {"name": "metadata", "dtype": "string"}, {"name": "sources", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24224616, "num_examples": 18551}, {"name": "test", "num_bytes": 2559357, "num_examples": 2062}], "download_size": 11373896, "dataset_size": 26783973}}
|
2023-09-25T01:24:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "vedic-sanskrit-sources"
More Information needed
|
[
"# Dataset Card for \"vedic-sanskrit-sources\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"vedic-sanskrit-sources\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"vedic-sanskrit-sources\"\n\nMore Information needed"
] |
e23d9b4a8b4fed01e7d49855f53a4fa3bbce6eb0
|
# Dataset Card for Evaluation run of zarakiquemparte/zarafusionix-l2-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [zarakiquemparte/zarafusionix-l2-7b](https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b",
"harness_winogrande_5",
split="train")
```
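Each run is also stored under a split named after its timestamp; a sketch of loading one run of one task directly (the config and split names mirror this repository's metadata):
```python
from datasets import load_dataset

# Load the GSM8K details for the specific timestamped run listed in the
# repo metadata, instead of the moving "train"/"latest" splits.
data = load_dataset(
    "open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b",
    "harness_gsm8k_5",
    split="2023_09_22T19_56_11.100071",
)
```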
## Latest results
These are the [latest results from run 2023-09-22T19:56:11.100071](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b/blob/main/results_2023-09-22T19-56-11.100071.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.20669043624161074,
"em_stderr": 0.004146877317311672,
"f1": 0.29368812919463155,
"f1_stderr": 0.004195906469994281,
"acc": 0.40933494018871774,
"acc_stderr": 0.009672451208885371
},
"harness|drop|3": {
"em": 0.20669043624161074,
"em_stderr": 0.004146877317311672,
"f1": 0.29368812919463155,
"f1_stderr": 0.004195906469994281
},
"harness|gsm8k|5": {
"acc": 0.07202426080363912,
"acc_stderr": 0.007121147983537124
},
"harness|winogrande|5": {
"acc": 0.7466456195737964,
"acc_stderr": 0.012223754434233618
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b
|
[
"region:us"
] |
2023-09-22T18:56:15+00:00
|
{"pretty_name": "Evaluation run of zarakiquemparte/zarafusionix-l2-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [zarakiquemparte/zarafusionix-l2-7b](https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T19:56:11.100071](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b/blob/main/results_2023-09-22T19-56-11.100071.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.20669043624161074,\n \"em_stderr\": 0.004146877317311672,\n \"f1\": 0.29368812919463155,\n \"f1_stderr\": 0.004195906469994281,\n \"acc\": 0.40933494018871774,\n \"acc_stderr\": 0.009672451208885371\n },\n \"harness|drop|3\": {\n \"em\": 0.20669043624161074,\n \"em_stderr\": 0.004146877317311672,\n \"f1\": 0.29368812919463155,\n \"f1_stderr\": 0.004195906469994281\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07202426080363912,\n \"acc_stderr\": 0.007121147983537124\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7466456195737964,\n \"acc_stderr\": 0.012223754434233618\n }\n}\n```", "repo_url": "https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T19_56_11.100071", "path": ["**/details_harness|drop|3_2023-09-22T19-56-11.100071.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T19-56-11.100071.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T19_56_11.100071", "path": ["**/details_harness|gsm8k|5_2023-09-22T19-56-11.100071.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T19-56-11.100071.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T19_56_11.100071", "path": ["**/details_harness|winogrande|5_2023-09-22T19-56-11.100071.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T19-56-11.100071.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T19_56_11.100071", "path": ["results_2023-09-22T19-56-11.100071.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T19-56-11.100071.parquet"]}]}]}
|
2023-09-22T18:56:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of zarakiquemparte/zarafusionix-l2-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model zarakiquemparte/zarafusionix-l2-7b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can, for instance, do the following:
## Latest results
These are the latest results from run 2023-09-22T19:56:11.100071 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of zarakiquemparte/zarafusionix-l2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model zarakiquemparte/zarafusionix-l2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T19:56:11.100071(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of zarakiquemparte/zarafusionix-l2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model zarakiquemparte/zarafusionix-l2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T19:56:11.100071(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of zarakiquemparte/zarafusionix-l2-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model zarakiquemparte/zarafusionix-l2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T19:56:11.100071(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
83835c87346de32ac9223bdce5264e69ef3366ad
|
# Dataset Card for "invoices-and-receipts_ocr_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/invoices-and-receipts_ocr_v1
|
[
"region:us"
] |
2023-09-22T19:06:04+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "string"}, {"name": "parsed_data", "dtype": "string"}, {"name": "raw_data", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 465061949.289, "num_examples": 2043}, {"name": "test", "num_bytes": 23808463.0, "num_examples": 125}, {"name": "valid", "num_bytes": 22325731.0, "num_examples": 70}], "download_size": 281665599, "dataset_size": 511196143.289}}
|
2023-09-22T19:07:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "invoices-and-receipts_ocr_v1"
More Information needed
|
[
"# Dataset Card for \"invoices-and-receipts_ocr_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"invoices-and-receipts_ocr_v1\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"invoices-and-receipts_ocr_v1\"\n\nMore Information needed"
] |
038de06bbd98cf10f62aa13ebf5b35ab64ad4d31
|
# Dataset Card for Evaluation run of Azure99/blossom-v1-3b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Azure99/blossom-v1-3b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Azure99/blossom-v1-3b](https://huggingface.co/Azure99/blossom-v1-3b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v1-3b",
"harness_winogrande_5",
split="train")
```
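Once loaded, the per-sample details can be handed to pandas for inspection; a minimal sketch (assuming pandas is installed, and using the `harness_drop_3` config listed in this repository's metadata):
```python
from datasets import load_dataset

# Load the DROP details and convert them to a DataFrame for exploration.
data = load_dataset(
    "open-llm-leaderboard/details_Azure99__blossom-v1-3b",
    "harness_drop_3",
    split="latest",
)
df = data.to_pandas()
print(df.head())
```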
## Latest results
These are the [latest results from run 2023-09-22T20:19:06.674002](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v1-3b/blob/main/results_2023-09-22T20-19-06.674002.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.035968959731543626,
"em_stderr": 0.0019069930004768894,
"f1": 0.08654886744966468,
"f1_stderr": 0.002229945283926482,
"acc": 0.2962915868075896,
"acc_stderr": 0.007760914549413539
},
"harness|drop|3": {
"em": 0.035968959731543626,
"em_stderr": 0.0019069930004768894,
"f1": 0.08654886744966468,
"f1_stderr": 0.002229945283926482
},
"harness|gsm8k|5": {
"acc": 0.0037907505686125853,
"acc_stderr": 0.0016927007401502012
},
"harness|winogrande|5": {
"acc": 0.5887924230465666,
"acc_stderr": 0.013829128358676876
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Azure99__blossom-v1-3b
|
[
"region:us"
] |
2023-09-22T19:19:10+00:00
|
{"pretty_name": "Evaluation run of Azure99/blossom-v1-3b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Azure99/blossom-v1-3b](https://huggingface.co/Azure99/blossom-v1-3b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Azure99__blossom-v1-3b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T20:19:06.674002](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v1-3b/blob/main/results_2023-09-22T20-19-06.674002.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.035968959731543626,\n \"em_stderr\": 0.0019069930004768894,\n \"f1\": 0.08654886744966468,\n \"f1_stderr\": 0.002229945283926482,\n \"acc\": 0.2962915868075896,\n \"acc_stderr\": 0.007760914549413539\n },\n \"harness|drop|3\": {\n \"em\": 0.035968959731543626,\n \"em_stderr\": 0.0019069930004768894,\n \"f1\": 0.08654886744966468,\n \"f1_stderr\": 0.002229945283926482\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0037907505686125853,\n \"acc_stderr\": 0.0016927007401502012\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5887924230465666,\n \"acc_stderr\": 0.013829128358676876\n }\n}\n```", "repo_url": "https://huggingface.co/Azure99/blossom-v1-3b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T20_19_06.674002", "path": ["**/details_harness|drop|3_2023-09-22T20-19-06.674002.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T20-19-06.674002.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T20_19_06.674002", "path": ["**/details_harness|gsm8k|5_2023-09-22T20-19-06.674002.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T20-19-06.674002.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T20_19_06.674002", "path": ["**/details_harness|winogrande|5_2023-09-22T20-19-06.674002.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T20-19-06.674002.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T20_19_06.674002", "path": ["results_2023-09-22T20-19-06.674002.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T20-19-06.674002.parquet"]}]}]}
|
2023-09-22T19:19:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Azure99/blossom-v1-3b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Azure99/blossom-v1-3b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can, for instance, do the following:
## Latest results
These are the latest results from run 2023-09-22T20:19:06.674002 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split of each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Azure99/blossom-v1-3b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v1-3b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T20:19:06.674002(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Azure99/blossom-v1-3b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v1-3b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T20:19:06.674002(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
19,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Azure99/blossom-v1-3b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v1-3b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T20:19:06.674002(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5bc1b249a3cb7eda1491fb89ee5abaffdbdbbea0
|
# Dataset Card for "837a21b8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/837a21b8
|
[
"region:us"
] |
2023-09-22T19:38:56+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 168, "num_examples": 10}], "download_size": 1307, "dataset_size": 168}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T19:38:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "837a21b8"
More Information needed
|
[
"# Dataset Card for \"837a21b8\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"837a21b8\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"837a21b8\"\n\nMore Information needed"
] |
8c5e8dcc3feb5f157111abddf7eaf1952d1f0b1e
|
# Dataset Card for "a95a2c5b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/a95a2c5b
|
[
"region:us"
] |
2023-09-22T19:38:59+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 168, "num_examples": 10}], "download_size": 1307, "dataset_size": 168}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T19:38:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "a95a2c5b"
More Information needed
|
[
"# Dataset Card for \"a95a2c5b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"a95a2c5b\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"a95a2c5b\"\n\nMore Information needed"
] |
9b8eb0c34fdd2c555a3f3b9895faf00eaef13cb6
|
# Dataset Card for "toxic25m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Vaibhav9401/toxic25m
|
[
"region:us"
] |
2023-09-22T19:47:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "llama_finetune_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20143312184, "num_examples": 25159680}], "download_size": 3446911922, "dataset_size": 20143312184}}
|
2023-09-23T05:20:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "toxic25m"
More Information needed
|
[
"# Dataset Card for \"toxic25m\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"toxic25m\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"toxic25m\"\n\nMore Information needed"
] |
ead246019df76bc4b8a1de58898d85e27805ef46
|
# Dataset Card for Evaluation run of chavinlo/gpt4-x-alpaca
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chavinlo/gpt4-x-alpaca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [chavinlo/gpt4-x-alpaca](https://huggingface.co/chavinlo/gpt4-x-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Details for the 5-shot Winogrande task; the "train" split always
# points at the most recent evaluation run.
data = load_dataset("open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca",
	"harness_winogrande_5",
	split="train")
```
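The aggregated metrics shown below can be loaded the same way from the "results" configuration. A minimal sketch, assuming the "latest" split alias that this repo's configs list alongside the timestamped splits:

```python
from datasets import load_dataset

# Aggregated metrics for the most recent evaluation run; "latest"
# mirrors the newest timestamped split in the "results" configuration.
results = load_dataset("open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca",
                       "results",
                       split="latest")
print(results[0])
```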
## Latest results
These are the [latest results from run 2023-09-22T20:56:09.987040](https://huggingface.co/datasets/open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca/blob/main/results_2023-09-22T20-56-09.987040.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.15478187919463088,
"em_stderr": 0.003704111989193061,
"f1": 0.24988045302013467,
"f1_stderr": 0.00385619985047934,
"acc": 0.3648545063856345,
"acc_stderr": 0.008703557271933391
},
"harness|drop|3": {
"em": 0.15478187919463088,
"em_stderr": 0.003704111989193061,
"f1": 0.24988045302013467,
"f1_stderr": 0.00385619985047934
},
"harness|gsm8k|5": {
"acc": 0.028051554207733132,
"acc_stderr": 0.004548229533836362
},
"harness|winogrande|5": {
"acc": 0.7016574585635359,
"acc_stderr": 0.012858885010030421
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca
|
[
"region:us"
] |
2023-09-22T19:56:13+00:00
|
{"pretty_name": "Evaluation run of chavinlo/gpt4-x-alpaca", "dataset_summary": "Dataset automatically created during the evaluation run of model [chavinlo/gpt4-x-alpaca](https://huggingface.co/chavinlo/gpt4-x-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T20:56:09.987040](https://huggingface.co/datasets/open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca/blob/main/results_2023-09-22T20-56-09.987040.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.15478187919463088,\n \"em_stderr\": 0.003704111989193061,\n \"f1\": 0.24988045302013467,\n \"f1_stderr\": 0.00385619985047934,\n \"acc\": 0.3648545063856345,\n \"acc_stderr\": 0.008703557271933391\n },\n \"harness|drop|3\": {\n \"em\": 0.15478187919463088,\n \"em_stderr\": 0.003704111989193061,\n \"f1\": 0.24988045302013467,\n \"f1_stderr\": 0.00385619985047934\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.028051554207733132,\n \"acc_stderr\": 0.004548229533836362\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7016574585635359,\n \"acc_stderr\": 0.012858885010030421\n }\n}\n```", "repo_url": "https://huggingface.co/chavinlo/gpt4-x-alpaca", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T20_56_09.987040", "path": ["**/details_harness|drop|3_2023-09-22T20-56-09.987040.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T20-56-09.987040.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T20_56_09.987040", "path": ["**/details_harness|gsm8k|5_2023-09-22T20-56-09.987040.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T20-56-09.987040.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T20_56_09.987040", "path": ["**/details_harness|winogrande|5_2023-09-22T20-56-09.987040.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T20-56-09.987040.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T20_56_09.987040", "path": ["results_2023-09-22T20-56-09.987040.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T20-56-09.987040.parquet"]}]}]}
|
2023-09-22T19:56:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of chavinlo/gpt4-x-alpaca
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model chavinlo/gpt4-x-alpaca on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
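A minimal sketch, mirroring the loader call from the full card above:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca",
                    "harness_winogrande_5",
                    split="train")
```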
## Latest results
These are the latest results from run 2023-09-22T20:56:09.987040 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of chavinlo/gpt4-x-alpaca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chavinlo/gpt4-x-alpaca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T20:56:09.987040(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of chavinlo/gpt4-x-alpaca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chavinlo/gpt4-x-alpaca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T20:56:09.987040(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chavinlo/gpt4-x-alpaca## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model chavinlo/gpt4-x-alpaca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T20:56:09.987040(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4ef927a18cfede25730e759b370332349af44337
|
# Dataset Card for "ficbook_prompts_best_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/ficbook_prompts_best_10k
|
[
"region:us"
] |
2023-09-22T19:56:20+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "solution_short_llama2", "dtype": "string"}, {"name": "solution_full", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 268346552, "num_examples": 10000}], "download_size": 138937080, "dataset_size": 268346552}}
|
2023-09-25T16:36:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ficbook_prompts_best_10k"
More Information needed
|
[
"# Dataset Card for \"ficbook_prompts_best_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ficbook_prompts_best_10k\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ficbook_prompts_best_10k\"\n\nMore Information needed"
] |
47b68822aa37d421a5bc3f1518149f0e24e05cad
|
# Dataset Card for "construction_sample_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lokesh2002/construction_sample_dataset2
|
[
"region:us"
] |
2023-09-22T20:05:57+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4214015.0, "num_examples": 10}], "download_size": 4162284, "dataset_size": 4214015.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T20:05:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "construction_sample_dataset2"
More Information needed
|
[
"# Dataset Card for \"construction_sample_dataset2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"construction_sample_dataset2\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"construction_sample_dataset2\"\n\nMore Information needed"
] |
ca73d65f591da5b1eef422325d1608609339ea21
|
# Dataset Card for Bee_Specimens
## Dataset Summary
The USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (https://dwc.tdwg.org). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301) have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.
## Languages
English
## Data Instances
A typical data point comprises the specimen metadata and image information for a single bumblebee specimen.
An example from the dataset looks as follows:
```json
{
  "occurrenceID": "http://n2t.net/ark:/65665/30042e2d8-669d-4520-b456-e3c64203eff8",
  "catalogNumber": "USNMENT01732649",
  "recordedBy": "R. Craig",
  "year": "1949",
  "month": "4",
  "day": "13",
  "country": "United States",
  "stateProvince": "California",
  "county": "Fresno",
  "locality": "Auberry",
  "decimalLatitude": "37.0808",
  "decimalLongitude": "-119.485",
  "identifiedBy": "O'Brien, L. R.",
  "scientificName": "Xylocopa (Notoxylocopa) tabaniformis orpifex",
  "genus": "Xylocopa",
  "subgenus": "Notoxylocopa",
  "specificEpithet": "tabaniformis",
  "infraspecificEpithet": "orpifex",
  "scientificNameAuthorship": "Smith",
  "accessURI": "https://ids.si.edu/ids/deliveryService?id=NMNH-USNMENT01732649",
  "PixelXDimension": 2000,
  "PixelYDimension": 1212
}
```
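Each record's accessURI resolves to the specimen image itself. A minimal sketch for fetching one image, assuming network access and the requests and Pillow packages:

```python
import io

import requests
from PIL import Image

# accessURI from the example record above.
uri = "https://ids.si.edu/ids/deliveryService?id=NMNH-USNMENT01732649"
img = Image.open(io.BytesIO(requests.get(uri, timeout=30).content))
print(img.size)  # should match (PixelXDimension, PixelYDimension)
```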
## Data Fields
Specimen metadata fields conform to the Darwin Core data standard and are detailed here: https://dwc.tdwg.org. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: https://ac.tdwg.org/.
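For a quick look at these fields, a minimal sketch, assuming the dataset is loaded from the Hub repo id under which this card is published (MikeTrizna/bee_specimens):

```python
from datasets import load_dataset

# One row per specimen; the dataset ships a single "train" split.
bees = load_dataset("MikeTrizna/bee_specimens", split="train")

print(bees.features)  # Darwin Core / Audiovisual Core field names and dtypes
row = bees[0]
print(row["scientificName"], row["accessURI"])
```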
## Curation Rationale
The dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.
## Initial Data Collection and Normalization
Bumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu).
NMNH specimen data are exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, https://collections.nmnh.si.edu/ipt/). Some data transformation takes place within EMu, and GBIF likewise normalizes the data to meet their standards.
## Who are the source language producers?
The occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.
## Annotations
The specimen occurrence data in Darwin Core fields.
## Annotation process
The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.
## Who are the annotators?
Original collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. Data transcription was carried out by online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.
## Personal and Sensitive Information
The dataset contains the names of the collectors and identifiers.
## Social Impact of Dataset
Digitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change.
The dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola.
Some site/location names could cause harm as they are insensitive or racist towards indigenous communities.
## Discussion of Biases
Estimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons.
The majority of specimens in this dataset originate from North America.
Most specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.
## Other Known Limitations
As with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time.
Species identifications may be inaccurate or not up-to-date based on the latest classification.
Collector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear.
Locality data may use historical place names that are no longer used.
Dates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information).
For specimens collected from Brazil, specimen images are not included in the dataset.
For endangered species, locality data is not included in the dataset.
## Dataset Curators
Smithsonian National Museum of Natural History, Department of Entomology.
Jessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.
## Licensing Information
Public domain, Creative Commons CC0.
## Citation Information
Orrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. https://collections.nmnh.si.edu/ipt/resource?r=nmnh_extant_dwc-a&v=1.72
## Contributions
Thanks to NMNH for adding this dataset.
|
MikeTrizna/bee_specimens
|
[
"license:cc0-1.0",
"region:us"
] |
2023-09-22T20:07:01+00:00
|
{"license": "cc0-1.0", "dataset_info": {"features": [{"name": "occurrenceID", "dtype": "string"}, {"name": "catalogNumber", "dtype": "string"}, {"name": "recordedBy", "dtype": "string"}, {"name": "year", "dtype": "int64"}, {"name": "month", "dtype": "int64"}, {"name": "day", "dtype": "int64"}, {"name": "country", "dtype": "string"}, {"name": "stateProvince", "dtype": "string"}, {"name": "county", "dtype": "string"}, {"name": "locality", "dtype": "string"}, {"name": "decimalLatitude", "dtype": "float64"}, {"name": "decimalLongitude", "dtype": "float64"}, {"name": "identifiedBy", "dtype": "string"}, {"name": "scientificName", "dtype": "string"}, {"name": "genus", "dtype": "string"}, {"name": "subgenus", "dtype": "string"}, {"name": "specificEpithet", "dtype": "string"}, {"name": "infraspecificEpithet", "dtype": "string"}, {"name": "scientificNameAuthorship", "dtype": "string"}, {"name": "PixelXDimension", "dtype": "float64"}, {"name": "PixelYDimension", "dtype": "float64"}, {"name": "accessURI", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26732760, "num_examples": 73387}], "download_size": 7117791, "dataset_size": 26732760}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T20:12:23+00:00
|
[] |
[] |
TAGS
#license-cc0-1.0 #region-us
|
# Dataset Card for Bee_Specimens
## Dataset Summary
The USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301) have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.
## Languages
English
## Data Instances
A typical data point comprises the specimen metadata and image information for a single bumblebee specimen.
An example from the dataset looks as follows:
## Data Fields
Specimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL
## Curation Rationale
The dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.
## Initial Data Collection and Normalization
Bumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu).
NMNH specimen data are exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL). Some data transformation takes place within EMu, and GBIF likewise normalizes the data to meet their standards.
## Who are the source language producers?
The occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.
## Annotations
The specimen occurrence data in Darwin Core fields.
## Annotation process
The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.
## Who are the annotators?
Original collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. Data transcription was carried out by online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.
## Personal and Sensitive Information
The dataset contains the names of the collectors and identifiers.
## Social Impact of Dataset
Digitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change.
The dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola.
Some site/location names could cause harm as they are insensitive or racist towards indigenous communities.
## Discussion of Biases
Estimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons.
The majority of specimens in this dataset originate from North America.
Most specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.
## Other Known Limitations
As with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time.
Species identifications may be inaccurate or not up-to-date based on the latest classification.
Collector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear.
Locality data may use historical place names that are no longer used.
Dates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information).
For specimens collected from Brazil, specimen images are not included in the dataset.
For endangered species, locality data is not included in the dataset.
## Dataset Curators
Smithsonian National Museum of Natural History, Department of Entomology.
Jessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.
## Licensing Information
Public domain, Creative Commons CC0.
Orrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. URL
## Contributions
Thanks to NMNH for adding this dataset.
|
[
"# Dataset Card for Bee_Specimens",
"## Dataset Summary \n\nThe USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301), have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.",
"## Languages \n\nEnglish",
"## Data Instances \n\nA typical data point comprises of the specimen metadata and image information for a single bumblebee specimen.\n\nAn example from the dataset looks as follows:",
"## Data Fields \n\nSpecimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL",
"## Curation Rationale \n\nThe dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.",
"## Initial Data Collection and Normalization \n\nBumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu). \n\nNMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.",
"## Who are the source language producers? \n\nThe occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.",
"## Annotations \n\nThe specimen occurrence data in Darwin Core fields.",
"## Annotation process \n\nThe occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.",
"## Who are the annotators? \n\nOriginal collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.",
"## Personal and Sensitive Information \n\nThe dataset contains the names of the collectors and identifiers.",
"## Social Impact of Dataset \n\nDigitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change. \n\nThe dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola. \n\nSome site/location names could cause harm as they are insensitive or racist towards indigenous communities.",
"## Discussion of Biases \n\nEstimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons. \n\nThe majority of specimens in this dataset originate from North America. \n\nMost specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.",
"## Other Known Limitations \n\nAs with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time. \n\nSpecies identifications may be inaccurate or not up-to-date based on the latest classification. \n\nCollector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear. \n\nLocality data may use historical place names that are no longer used. \n\nDates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information). \n\nFor specimens collected from Brazil, specimen images are not included in the dataset. \n\nFor endangered species, locality data is not included in the dataset.",
"## Dataset Curators \n\nSmithsonian National Museum of Natural History, Department of Entomology. \n\nJessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.",
"## Licensing Information \n\nPublic domain, Creative Commons CC0. \n\n \n\nOrrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. URL",
"## Contributions \n\nThanks to NMNH for adding this dataset."
] |
[
"TAGS\n#license-cc0-1.0 #region-us \n",
"# Dataset Card for Bee_Specimens",
"## Dataset Summary \n\nThe USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301), have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.",
"## Languages \n\nEnglish",
"## Data Instances \n\nA typical data point comprises of the specimen metadata and image information for a single bumblebee specimen.\n\nAn example from the dataset looks as follows:",
"## Data Fields \n\nSpecimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL",
"## Curation Rationale \n\nThe dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.",
"## Initial Data Collection and Normalization \n\nBumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu). \n\nNMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.",
"## Who are the source language producers? \n\nThe occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.",
"## Annotations \n\nThe specimen occurrence data in Darwin Core fields.",
"## Annotation process \n\nThe occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.",
"## Who are the annotators? \n\nOriginal collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.",
"## Personal and Sensitive Information \n\nThe dataset contains the names of the collectors and identifiers.",
"## Social Impact of Dataset \n\nDigitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change. \n\nThe dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola. \n\nSome site/location names could cause harm as they are insensitive or racist towards indigenous communities.",
"## Discussion of Biases \n\nEstimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons. \n\nThe majority of specimens in this dataset originate from North America. \n\nMost specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.",
"## Other Known Limitations \n\nAs with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time. \n\nSpecies identifications may be inaccurate or not up-to-date based on the latest classification. \n\nCollector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear. \n\nLocality data may use historical place names that are no longer used. \n\nDates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information). \n\nFor specimens collected from Brazil, specimen images are not included in the dataset. \n\nFor endangered species, locality data is not included in the dataset.",
"## Dataset Curators \n\nSmithsonian National Museum of Natural History, Department of Entomology. \n\nJessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.",
"## Licensing Information \n\nPublic domain, Creative Commons CC0. \n\n \n\nOrrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. URL",
"## Contributions \n\nThanks to NMNH for adding this dataset."
] |
[
14,
11,
185,
4,
41,
42,
203,
251,
53,
17,
33,
73,
23,
99,
125,
245,
43,
60,
15
] |
[
"passage: TAGS\n#license-cc0-1.0 #region-us \n# Dataset Card for Bee_Specimens## Dataset Summary \n\nThe USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301), have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.## Languages \n\nEnglish## Data Instances \n\nA typical data point comprises of the specimen metadata and image information for a single bumblebee specimen.\n\nAn example from the dataset looks as follows:## Data Fields \n\nSpecimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL## Curation Rationale \n\nThe dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.",
"passage: ## Initial Data Collection and Normalization \n\nBumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu). \n\nNMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.## Who are the source language producers? \n\nThe occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.## Annotations \n\nThe specimen occurrence data in Darwin Core fields.## Annotation process \n\nThe occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.## Who are the annotators? \n\nOriginal collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.## Personal and Sensitive Information \n\nThe dataset contains the names of the collectors and identifiers.## Social Impact of Dataset \n\nDigitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change. \n\nThe dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola. \n\nSome site/location names could cause harm as they are insensitive or racist towards indigenous communities.## Discussion of Biases \n\nEstimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons. \n\nThe majority of specimens in this dataset originate from North America. \n\nMost specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees."
] |
f5c0c0e1782aca3f83f1229066e3174aca306487
|
# Dataset Card for "instruct_control_and_lima"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aditijha/instruct_control_and_lima
|
[
"region:us"
] |
2023-09-22T20:13:22+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7084154, "num_examples": 2000}], "download_size": 4023227, "dataset_size": 7084154}}
|
2023-09-22T20:13:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instruct_control_and_lima"
More Information needed
|
[
"# Dataset Card for \"instruct_control_and_lima\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instruct_control_and_lima\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instruct_control_and_lima\"\n\nMore Information needed"
] |
5974cc5defcf4a232629d7a7a4a114a55ddb90e4
|
# Dataset Card for "instruct_v1_1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aditijha/instruct_v1_1k
|
[
"region:us"
] |
2023-09-22T20:16:53+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 737647.9683021413, "num_examples": 1000}], "download_size": 387559, "dataset_size": 737647.9683021413}}
|
2023-09-22T20:16:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instruct_v1_1k"
More Information needed
|
[
"# Dataset Card for \"instruct_v1_1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instruct_v1_1k\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v1_1k\"\n\nMore Information needed"
] |
14574060b9e68e20764d83236b48588911f4ec07
|
# Dataset Card for "instruct_v1_2k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aditijha/instruct_v1_2k
|
[
"region:us"
] |
2023-09-22T20:16:54+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1475295.9366042826, "num_examples": 2000}], "download_size": 788725, "dataset_size": 1475295.9366042826}}
|
2023-09-22T20:16:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instruct_v1_2k"
More Information needed
|
[
"# Dataset Card for \"instruct_v1_2k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instruct_v1_2k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v1_2k\"\n\nMore Information needed"
] |
5cc47a804624184c6acab7f6a2387db1d5c059f3
|
# Dataset Card for "instruct_v1_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aditijha/instruct_v1_5k
|
[
"region:us"
] |
2023-09-22T20:16:55+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3688239.8415107066, "num_examples": 5000}], "download_size": 1942992, "dataset_size": 3688239.8415107066}}
|
2023-09-22T20:16:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instruct_v1_5k"
More Information needed
|
[
"# Dataset Card for \"instruct_v1_5k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instruct_v1_5k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v1_5k\"\n\nMore Information needed"
] |
00b1ba4e4581d277c0a91a706977cd9864fa230c
|
# Dataset Card for "instruct_v1_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aditijha/instruct_v1_10k
|
[
"region:us"
] |
2023-09-22T20:16:56+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7376479.683021413, "num_examples": 10000}], "download_size": 3930326, "dataset_size": 7376479.683021413}}
|
2023-09-22T20:16:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instruct_v1_10k"
More Information needed
|
[
"# Dataset Card for \"instruct_v1_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instruct_v1_10k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v1_10k\"\n\nMore Information needed"
] |
1853dbf3175766f43d99f01cc5af3460d6dae614
|
# Dataset Card for Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca](https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca",
"harness_winogrande_5",
split="train")
```
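The task configurations are not limited to `harness_winogrande_5`; a small sketch for discovering every available configuration and pulling the most recent run directly (each task config also exposes a `latest` split, listed in the dataset metadata below, which mirrors the newest timestamped split):

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca"

# One configuration per evaluated task, plus the aggregated "results" config.
print(get_dataset_config_names(repo))

# The "latest" split of a task config always points at the newest run.
gsm8k_details = load_dataset(repo, "harness_gsm8k_5", split="latest")
```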
## Latest results
These are the [latest results from run 2023-09-22T21:36:39.212716](https://huggingface.co/datasets/open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca/blob/main/results_2023-09-22T21-36-39.212716.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0004404362416107381,
"f1_stderr": 6.976502994544788e-05,
"acc": 0.2541436464088398,
"acc_stderr": 0.007025277661412096
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0004404362416107381,
"f1_stderr": 6.976502994544788e-05
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5082872928176796,
"acc_stderr": 0.014050555322824192
}
}
```
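The aggregated numbers above are also stored as data: the "results" configuration keeps one split per run, and its `latest` split tracks the newest one. A minimal sketch:

```python
from datasets import load_dataset

# Load the aggregated metrics for the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca",
    "results",
    split="latest",
)
```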
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca
|
[
"region:us"
] |
2023-09-22T20:20:16+00:00
|
{"pretty_name": "Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca", "dataset_summary": "Dataset automatically created during the evaluation run of model [Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca](https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T21:36:39.212716](https://huggingface.co/datasets/open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca/blob/main/results_2023-09-22T21-36-39.212716.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.0004404362416107381,\n \"f1_stderr\": 6.976502994544788e-05,\n \"acc\": 0.2541436464088398,\n \"acc_stderr\": 0.007025277661412096\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.0004404362416107381,\n \"f1_stderr\": 6.976502994544788e-05\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5082872928176796,\n \"acc_stderr\": 0.014050555322824192\n }\n}\n```", "repo_url": "https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T21_20_12.395485", "path": ["**/details_harness|drop|3_2023-09-22T21-20-12.395485.parquet"]}, {"split": "2023_09_22T21_36_39.212716", "path": ["**/details_harness|drop|3_2023-09-22T21-36-39.212716.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T21-36-39.212716.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T21_20_12.395485", "path": ["**/details_harness|gsm8k|5_2023-09-22T21-20-12.395485.parquet"]}, {"split": "2023_09_22T21_36_39.212716", "path": ["**/details_harness|gsm8k|5_2023-09-22T21-36-39.212716.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T21-36-39.212716.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T21_20_12.395485", "path": ["**/details_harness|winogrande|5_2023-09-22T21-20-12.395485.parquet"]}, {"split": "2023_09_22T21_36_39.212716", "path": ["**/details_harness|winogrande|5_2023-09-22T21-36-39.212716.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-09-22T21-36-39.212716.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T21_20_12.395485", "path": ["results_2023-09-22T21-20-12.395485.parquet"]}, {"split": "2023_09_22T21_36_39.212716", "path": ["results_2023-09-22T21-36-39.212716.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T21-36-39.212716.parquet"]}]}]}
|
2023-09-22T20:36:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-22T21:36:39.212716 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T21:36:39.212716(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T21:36:39.212716(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
180,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T21:36:39.212716(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
73dc97962621f0099a5665833bdab4fabe037ab3
|
https://arxiv.org/abs/2307.12981
|
ShuhongZheng/3D-LLM
|
[
"arxiv:2307.12981",
"region:us"
] |
2023-09-22T20:30:36+00:00
|
{}
|
2023-10-16T03:12:23+00:00
|
[
"2307.12981"
] |
[] |
TAGS
#arxiv-2307.12981 #region-us
|
URL
|
[] |
[
"TAGS\n#arxiv-2307.12981 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#arxiv-2307.12981 #region-us \n"
] |
cd9e741e5230d274f89f62a109c30cba130b1178
|
# Dataset Card for Evaluation run of adonlee/LLaMA_2_70B_LoRA
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/adonlee/LLaMA_2_70B_LoRA
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_70B_LoRA](https://huggingface.co/adonlee/LLaMA_2_70B_LoRA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA",
"harness_truthfulqa_mc_0",
split="train")
```
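For eyeballing individual TruthfulQA samples, the loaded split converts straight to a dataframe; a quick sketch, assuming `pandas` is installed:

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA",
    "harness_truthfulqa_mc_0",
    split="train",
)

# Dataset.to_pandas() yields a DataFrame with one row per evaluated sample.
print(data.to_pandas().head())
```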
## Latest results
These are the [latest results from run 2023-09-22T21:35:51.410251](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA/blob/main/results_2023-09-22T21-35-51.410251.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7077096775676626,
"acc_stderr": 0.030867670314758275,
"acc_norm": 0.7114995822621553,
"acc_norm_stderr": 0.030836833292351554,
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6451679386365279,
"mc2_stderr": 0.014753028795637621
},
"harness|arc:challenge|25": {
"acc": 0.6902730375426621,
"acc_stderr": 0.013512058415238361,
"acc_norm": 0.726962457337884,
"acc_norm_stderr": 0.013019332762635743
},
"harness|hellaswag|10": {
"acc": 0.6886078470424218,
"acc_stderr": 0.004621163476949205,
"acc_norm": 0.8755228042222665,
"acc_norm_stderr": 0.003294504807555228
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047424,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047424
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8223684210526315,
"acc_stderr": 0.03110318238312338,
"acc_norm": 0.8223684210526315,
"acc_norm_stderr": 0.03110318238312338
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7358490566037735,
"acc_stderr": 0.02713429162874171,
"acc_norm": 0.7358490566037735,
"acc_norm_stderr": 0.02713429162874171
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8263888888888888,
"acc_stderr": 0.03167473383795718,
"acc_norm": 0.8263888888888888,
"acc_norm_stderr": 0.03167473383795718
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.03514942551267439,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.03514942551267439
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.048108401480826346,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.048108401480826346
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7106382978723405,
"acc_stderr": 0.02964400657700962,
"acc_norm": 0.7106382978723405,
"acc_norm_stderr": 0.02964400657700962
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6206896551724138,
"acc_stderr": 0.04043461861916746,
"acc_norm": 0.6206896551724138,
"acc_norm_stderr": 0.04043461861916746
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.02572209706438853,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.02572209706438853
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8096774193548387,
"acc_stderr": 0.022331707611823078,
"acc_norm": 0.8096774193548387,
"acc_norm_stderr": 0.022331707611823078
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.034819048444388045,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.034819048444388045
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.898989898989899,
"acc_stderr": 0.021469735576055343,
"acc_norm": 0.898989898989899,
"acc_norm_stderr": 0.021469735576055343
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9326424870466321,
"acc_stderr": 0.0180883938390789,
"acc_norm": 0.9326424870466321,
"acc_norm_stderr": 0.0180883938390789
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7102564102564103,
"acc_stderr": 0.023000628243687968,
"acc_norm": 0.7102564102564103,
"acc_norm_stderr": 0.023000628243687968
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253252,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253252
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7815126050420168,
"acc_stderr": 0.02684151432295893,
"acc_norm": 0.7815126050420168,
"acc_norm_stderr": 0.02684151432295893
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4900662251655629,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.4900662251655629,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9009174311926605,
"acc_stderr": 0.01280978008187893,
"acc_norm": 0.9009174311926605,
"acc_norm_stderr": 0.01280978008187893
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.033622774366080424,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.033622774366080424
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9019607843137255,
"acc_stderr": 0.0208711184555521,
"acc_norm": 0.9019607843137255,
"acc_norm_stderr": 0.0208711184555521
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8818565400843882,
"acc_stderr": 0.02101105265987847,
"acc_norm": 0.8818565400843882,
"acc_norm_stderr": 0.02101105265987847
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7847533632286996,
"acc_stderr": 0.027584066602208274,
"acc_norm": 0.7847533632286996,
"acc_norm_stderr": 0.027584066602208274
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8473282442748091,
"acc_stderr": 0.031545216720054725,
"acc_norm": 0.8473282442748091,
"acc_norm_stderr": 0.031545216720054725
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035202,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035202
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8425925925925926,
"acc_stderr": 0.035207039905179635,
"acc_norm": 0.8425925925925926,
"acc_norm_stderr": 0.035207039905179635
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8466257668711656,
"acc_stderr": 0.0283116014414386,
"acc_norm": 0.8466257668711656,
"acc_norm_stderr": 0.0283116014414386
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.01831589168562585,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.01831589168562585
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8697318007662835,
"acc_stderr": 0.012036729568216054,
"acc_norm": 0.8697318007662835,
"acc_norm_stderr": 0.012036729568216054
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7687861271676301,
"acc_stderr": 0.022698657167855713,
"acc_norm": 0.7687861271676301,
"acc_norm_stderr": 0.022698657167855713
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.646927374301676,
"acc_stderr": 0.01598420454526858,
"acc_norm": 0.646927374301676,
"acc_norm_stderr": 0.01598420454526858
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.024739981355113592,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.024739981355113592
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7684887459807074,
"acc_stderr": 0.023956532766639133,
"acc_norm": 0.7684887459807074,
"acc_norm_stderr": 0.023956532766639133
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8271604938271605,
"acc_stderr": 0.02103851777015737,
"acc_norm": 0.8271604938271605,
"acc_norm_stderr": 0.02103851777015737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.599290780141844,
"acc_stderr": 0.029233465745573096,
"acc_norm": 0.599290780141844,
"acc_norm_stderr": 0.029233465745573096
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5814863102998696,
"acc_stderr": 0.012599505608336482,
"acc_norm": 0.5814863102998696,
"acc_norm_stderr": 0.012599505608336482
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7316176470588235,
"acc_stderr": 0.026917481224377204,
"acc_norm": 0.7316176470588235,
"acc_norm_stderr": 0.026917481224377204
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.017077373377856933,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.017077373377856933
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8081632653061225,
"acc_stderr": 0.025206963154225395,
"acc_norm": 0.8081632653061225,
"acc_norm_stderr": 0.025206963154225395
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8756218905472637,
"acc_stderr": 0.023335401790166323,
"acc_norm": 0.8756218905472637,
"acc_norm_stderr": 0.023335401790166323
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197769,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197769
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.02567934272327692,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.02567934272327692
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6451679386365279,
"mc2_stderr": 0.014753028795637621
}
}
```
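Beyond the per-subject configurations, the 57 `hendrycksTest` (MMLU) subtasks above are also exposed together as a single grouped configuration, which is handy for analyses that span all subjects at once; a minimal sketch:

```python
from datasets import load_dataset

# All MMLU subtask details in one configuration; "latest" tracks the newest run.
mmlu = load_dataset(
    "open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA",
    "harness_hendrycksTest_5",
    split="latest",
)
```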
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA
|
[
"region:us"
] |
2023-09-22T20:36:15+00:00
|
{"pretty_name": "Evaluation run of adonlee/LLaMA_2_70B_LoRA", "dataset_summary": "Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_70B_LoRA](https://huggingface.co/adonlee/LLaMA_2_70B_LoRA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T21:35:51.410251](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA/blob/main/results_2023-09-22T21-35-51.410251.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7077096775676626,\n \"acc_stderr\": 0.030867670314758275,\n \"acc_norm\": 0.7114995822621553,\n \"acc_norm_stderr\": 0.030836833292351554,\n \"mc1\": 0.4663402692778458,\n \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6451679386365279,\n \"mc2_stderr\": 0.014753028795637621\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6902730375426621,\n \"acc_stderr\": 0.013512058415238361,\n \"acc_norm\": 0.726962457337884,\n \"acc_norm_stderr\": 0.013019332762635743\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6886078470424218,\n \"acc_stderr\": 0.004621163476949205,\n \"acc_norm\": 0.8755228042222665,\n \"acc_norm_stderr\": 0.003294504807555228\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.041539484047424,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.041539484047424\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8223684210526315,\n \"acc_stderr\": 0.03110318238312338,\n \"acc_norm\": 0.8223684210526315,\n \"acc_norm_stderr\": 0.03110318238312338\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7358490566037735,\n \"acc_stderr\": 0.02713429162874171,\n \"acc_norm\": 0.7358490566037735,\n \"acc_norm_stderr\": 0.02713429162874171\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8263888888888888,\n \"acc_stderr\": 0.03167473383795718,\n \"acc_norm\": 0.8263888888888888,\n \"acc_norm_stderr\": 0.03167473383795718\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n 
\"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.03514942551267439,\n \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.03514942551267439\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7106382978723405,\n \"acc_stderr\": 0.02964400657700962,\n \"acc_norm\": 0.7106382978723405,\n \"acc_norm_stderr\": 0.02964400657700962\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6206896551724138,\n \"acc_stderr\": 0.04043461861916746,\n \"acc_norm\": 0.6206896551724138,\n \"acc_norm_stderr\": 0.04043461861916746\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.47619047619047616,\n \"acc_stderr\": 0.02572209706438853,\n \"acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.02572209706438853\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8096774193548387,\n \"acc_stderr\": 0.022331707611823078,\n \"acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.022331707611823078\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5714285714285714,\n \"acc_stderr\": 0.034819048444388045,\n \"acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.034819048444388045\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.898989898989899,\n \"acc_stderr\": 0.021469735576055343,\n \"acc_norm\": 0.898989898989899,\n \"acc_norm_stderr\": 0.021469735576055343\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9326424870466321,\n \"acc_stderr\": 0.0180883938390789,\n \"acc_norm\": 0.9326424870466321,\n \"acc_norm_stderr\": 0.0180883938390789\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7102564102564103,\n 
\"acc_stderr\": 0.023000628243687968,\n \"acc_norm\": 0.7102564102564103,\n \"acc_norm_stderr\": 0.023000628243687968\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253252,\n \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253252\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7815126050420168,\n \"acc_stderr\": 0.02684151432295893,\n \"acc_norm\": 0.7815126050420168,\n \"acc_norm_stderr\": 0.02684151432295893\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248436,\n \"acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248436\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9009174311926605,\n \"acc_stderr\": 0.01280978008187893,\n \"acc_norm\": 0.9009174311926605,\n \"acc_norm_stderr\": 0.01280978008187893\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.033622774366080424,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.033622774366080424\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9019607843137255,\n \"acc_stderr\": 0.0208711184555521,\n \"acc_norm\": 0.9019607843137255,\n \"acc_norm_stderr\": 0.0208711184555521\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8818565400843882,\n \"acc_stderr\": 0.02101105265987847,\n \"acc_norm\": 0.8818565400843882,\n \"acc_norm_stderr\": 0.02101105265987847\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7847533632286996,\n \"acc_stderr\": 0.027584066602208274,\n \"acc_norm\": 0.7847533632286996,\n \"acc_norm_stderr\": 0.027584066602208274\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8473282442748091,\n \"acc_stderr\": 0.031545216720054725,\n \"acc_norm\": 0.8473282442748091,\n \"acc_norm_stderr\": 0.031545216720054725\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035202,\n \"acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035202\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8425925925925926,\n \"acc_stderr\": 0.035207039905179635,\n \"acc_norm\": 0.8425925925925926,\n \"acc_norm_stderr\": 0.035207039905179635\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8466257668711656,\n \"acc_stderr\": 0.0283116014414386,\n \"acc_norm\": 0.8466257668711656,\n \"acc_norm_stderr\": 0.0283116014414386\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n \"acc_stderr\": 0.01831589168562585,\n \"acc_norm\": 0.9145299145299145,\n \"acc_norm_stderr\": 0.01831589168562585\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8697318007662835,\n \"acc_stderr\": 0.012036729568216054,\n \"acc_norm\": 0.8697318007662835,\n 
\"acc_norm_stderr\": 0.012036729568216054\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7687861271676301,\n \"acc_stderr\": 0.022698657167855713,\n \"acc_norm\": 0.7687861271676301,\n \"acc_norm_stderr\": 0.022698657167855713\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.646927374301676,\n \"acc_stderr\": 0.01598420454526858,\n \"acc_norm\": 0.646927374301676,\n \"acc_norm_stderr\": 0.01598420454526858\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.024739981355113592,\n \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.024739981355113592\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7684887459807074,\n \"acc_stderr\": 0.023956532766639133,\n \"acc_norm\": 0.7684887459807074,\n \"acc_norm_stderr\": 0.023956532766639133\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8271604938271605,\n \"acc_stderr\": 0.02103851777015737,\n \"acc_norm\": 0.8271604938271605,\n \"acc_norm_stderr\": 0.02103851777015737\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.599290780141844,\n \"acc_stderr\": 0.029233465745573096,\n \"acc_norm\": 0.599290780141844,\n \"acc_norm_stderr\": 0.029233465745573096\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5814863102998696,\n \"acc_stderr\": 0.012599505608336482,\n \"acc_norm\": 0.5814863102998696,\n \"acc_norm_stderr\": 0.012599505608336482\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7316176470588235,\n \"acc_stderr\": 0.026917481224377204,\n \"acc_norm\": 0.7316176470588235,\n \"acc_norm_stderr\": 0.026917481224377204\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7679738562091504,\n \"acc_stderr\": 0.017077373377856933,\n \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.017077373377856933\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.041723430387053825,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.041723430387053825\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8081632653061225,\n \"acc_stderr\": 0.025206963154225395,\n \"acc_norm\": 0.8081632653061225,\n \"acc_norm_stderr\": 0.025206963154225395\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n \"acc_stderr\": 0.023335401790166323,\n \"acc_norm\": 0.8756218905472637,\n \"acc_norm_stderr\": 0.023335401790166323\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.02567934272327692,\n \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.02567934272327692\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4663402692778458,\n \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6451679386365279,\n \"mc2_stderr\": 0.014753028795637621\n }\n}\n```", "repo_url": "https://huggingface.co/adonlee/LLaMA_2_70B_LoRA", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": 
"2023_09_22T21_35_51.410251", "path": ["**/details_harness|arc:challenge|25_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hellaswag|10_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T21-35-51.410251.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T21_35_51.410251", "path": ["results_2023-09-22T21-35-51.410251.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T21-35-51.410251.parquet"]}]}]}
|
2023-09-22T20:37:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of adonlee/LLaMA_2_70B_LoRA
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model adonlee/LLaMA_2_70B_LoRA on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
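For example (a sketch only: the repository name below is inferred from the leaderboard's usual `details_<org>__<model>` naming convention, since the card itself elides it, and the config name is taken from the config list above):
```python
from datasets import load_dataset

# Repository name assumed from the leaderboard's details_<org>__<model> convention.
data = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA",
                    "harness_truthfulqa_mc_0",
                    split="train")
```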
## Latest results
These are the latest results from run 2023-09-22T21:35:51.410251 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of adonlee/LLaMA_2_70B_LoRA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_70B_LoRA on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T21:35:51.410251(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of adonlee/LLaMA_2_70B_LoRA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_70B_LoRA on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T21:35:51.410251(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of adonlee/LLaMA_2_70B_LoRA## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model adonlee/LLaMA_2_70B_LoRA on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T21:35:51.410251(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
40e1e2a656d3baee7cc384d396df8a2214f68967
|
# Dataset Card for "receipt_cord_ocr_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/receipt_cord_ocr_v2
|
[
"region:us"
] |
2023-09-22T21:20:48+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "string"}, {"name": "parsed_data", "dtype": "string"}, {"name": "raw_data", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119205560.0, "num_examples": 800}, {"name": "test", "num_bytes": 15152937.0, "num_examples": 100}, {"name": "valid", "num_bytes": 15152937.0, "num_examples": 100}], "download_size": 147437931, "dataset_size": 149511434.0}}
|
2023-09-22T21:21:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "receipt_cord_ocr_v2"
More Information needed
|
[
"# Dataset Card for \"receipt_cord_ocr_v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"receipt_cord_ocr_v2\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"receipt_cord_ocr_v2\"\n\nMore Information needed"
] |
f51b82847c1d1cb0fc11a317b5c5413b6ed0747f
|
# DiscoEval Benchmark Datasets
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Sources](#dataset-sources)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Benchmark Creators](#benchmark-creators)
- [Citation Information](#citation-information)
- [Loading Data Examples](#loading-data-examples)
- [Loading Data for Sentence Positioning Task with the Arxiv data source](#loading-data-for-sentence-positioning-task-with-the-arxiv-data-source)
## Dataset Description
- **Repository:** [DiscoEval repository](https://github.com/ZeweiChu/DiscoEval)
- **Paper:** [Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations](https://arxiv.org/pdf/1909.00142)
### Dataset Summary
DiscoEval is an English-language benchmark that contains a test suite of 7
tasks to evaluate whether sentence representations include semantic information
relevant to discourse processing. The benchmark datasets offer a collection of
tasks designed to evaluate natural language understanding models in the context
of discourse analysis and coherence.
### Dataset Sources
- **Arxiv**: A repository of scientific papers and research articles.
- **Wikipedia**: An extensive online encyclopedia with articles on diverse topics.
- **Rocstory**: A dataset consisting of fictional stories.
- **Ubuntu IRC channel**: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.
- **PeerRead**: A dataset of scientific papers frequently used for discourse-related tasks.
- **RST Discourse Treebank**: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.
- **Penn Discourse Treebank**: Another dataset with annotated discourse relations, facilitating the study of discourse structure.
### Supported Tasks
1. **Sentence Positioning**
- **Datasets Sources**: Arxiv, Wikipedia, Rocstory
   - **Description**: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers, encode the five sentences to vector representations \\(x_i\\). As input to the classifier we include \\(x_1\\) and the concatenation of \\(x_1 - x_i\\) for all \\(i\\): \\([x_1, x_1 - x_2, x_1-x_3,x_1-x_4,x_1-x_5]\\) (a code sketch of these input constructions follows this list)
2. **Binary Sentence Ordering**
- **Datasets Sources**: Arxiv, Wikipedia, Rocstory
- **Description**: Determining whether two sentences are in the correct consecutive order, identifying the more coherent structure. To form the input when training classifiers, we concatenate the embeddings of both sentences with their element-wise difference: \\([x_1, x_2, x_1-x_2]\\)
3. **Discourse Coherence**
- **Datasets Sources**: Ubuntu IRC channel, Wikipedia
   - **Description**: Determine whether a sequence of six sentences forms a coherent paragraph. To form the input when training classifiers, encode all sentences to vector representations and concatenate all of them: \\([x_1, x_2, x_3, x_4, x_5, x_6]\\)
4. **Sentence Section Prediction**
- **Datasets Sources**: Constructed from PeerRead
- **Description**: Determine the section or category to which a sentence belongs within a scientific paper, based on the content and context. To form the input when training classifiers, simply input the sentence embedding.
5. **Discourse Relations**
- **Datasets Sources**: RST Discourse Treebank, Penn Discourse Treebank
   - **Description**: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse. To form the input when training classifiers, refer to the [original paper](https://arxiv.org/pdf/1909.00142) for instructions.
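These inputs are plain concatenations of sentence embeddings, so they are straightforward to reproduce. Below is a minimal NumPy sketch of the constructions for the first three tasks, assuming each sentence has already been encoded to a fixed-size vector (the choice of encoder is not part of the benchmark):

```python
import numpy as np

def sentence_positioning_input(x):
    # x: list of five sentence embeddings; output: [x1, x1-x2, x1-x3, x1-x4, x1-x5]
    return np.concatenate([x[0]] + [x[0] - x[i] for i in range(1, 5)])

def binary_sentence_ordering_input(x1, x2):
    # Output: [x1, x2, x1-x2]
    return np.concatenate([x1, x2, x1 - x2])

def discourse_coherence_input(x):
    # x: list of six sentence embeddings; output: [x1, x2, x3, x4, x5, x6]
    return np.concatenate(x)

# Example with random 300-dimensional stand-ins for sentence embeddings:
rng = np.random.default_rng(0)
emb = [rng.standard_normal(300) for _ in range(6)]
print(sentence_positioning_input(emb[:5]).shape)             # (1500,)
print(binary_sentence_ordering_input(emb[0], emb[1]).shape)  # (900,)
print(discourse_coherence_input(emb).shape)                  # (1800,)
```

For Sentence Section Prediction the classifier input is simply the sentence embedding itself, so no extra construction is needed.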
### Languages
The text in all datasets is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
All tasks are classification tasks, and they differ by the number of sentences per example and the type of label.
An example from the Sentence Positioning task would look as follows:
```
{'sentence_1': 'Dan was overweight as well.',
 'sentence_2': "Dan's parents were overweight.",
 'sentence_3': 'The doctors told his parents it was unhealthy.',
 'sentence_4': 'His parents understood and decided to make a change.',
 'sentence_5': 'They got themselves and Dan on a diet.',
 'label': '1'}
```
The label is '1' since the first sentence should go at position number 1 (counting from zero).
An example from the Binary Sentence Ordering task would look as follows:
```
{'sentence_1': 'When she walked in, she felt awkward.',
 'sentence_2': "Janet decided to go to her high school's party.",
 'label': '0'}
```
The label is '0' because this is not the correct order of the sentences. It should be sentence_2 and then sentence_1.
For more examples, you can refer to the [original paper](https://arxiv.org/pdf/1909.00142).
### Data Fields
In this benchmark, all data fields are strings, including the labels.
### Data Splits
The data is split into training, validation and test set for each of the tasks in the benchmark.
| Task and Dataset | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Sentence Positioning: Arxiv| 10000 | 4000 | 4000|
| Sentence Positioning: Rocstory| 10000 | 4000 | 4000|
| Sentence Positioning: Wiki| 10000 | 4000 | 4000|
| Binary Sentence Ordering: Arxiv| 20000 | 8000 | 8000|
| Binary Sentence Ordering: Rocstory| 20000 | 8000 | 8000|
| Binary Sentence Ordering: Wiki| 20000 | 8000 | 8000|
| Discourse Coherence: Chat| 5816 | 1834 | 2418|
| Discourse Coherence: Wiki| 10000 | 4000 | 4000|
| Sentence Section Prediction | 10000 | 4000 | 4000 |
| Discourse Relation: Penn Discourse Tree Bank: Implicit | 8693 | 2972 | 3024 |
| Discourse Relation: Penn Discourse Tree Bank: Explicit | 9383 | 3613 | 3758 |
| Discourse Relation: RST Discourse Tree Bank | 17051 | 2045 | 2308 |
## Additional Information
### Benchmark Creators
This benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technological Institute at Chicago.
### Citation Information
```
@inproceedings{mchen-discoeval-19,
title = {Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations},
author = {Mingda Chen and Zewei Chu and Kevin Gimpel},
booktitle = {Proc. of {EMNLP}},
year={2019}
}
```
## Loading Data Examples
### Loading Data for Sentence Positioning Task with the Arxiv data source
```python
from datasets import load_dataset
# Load the Sentence Positioning dataset
dataset = load_dataset(path="OfekGlick/DiscoEval", name="SParxiv")
# Access the train, validation, and test splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]
# Example usage: Print the first few training examples
for example in train_data.select(range(5)):
    print(example)
```
The other possible inputs for the `name` parameter are:
`SParxiv`, `SProcstory`, `SPwiki`, `SSPabs`, `PDTB-I`, `PDTB-E`, `BSOarxiv`, `BSOrocstory`, `BSOwiki`, `DCchat`, `DCwiki`, `RST`
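Any of these names can be substituted for `SParxiv` above; for instance, a minimal sketch loading Binary Sentence Ordering with the Wikipedia source:
```python
from datasets import load_dataset

bso_wiki = load_dataset(path="OfekGlick/DiscoEval", name="BSOwiki")
print(bso_wiki)  # DatasetDict with train, validation, and test splits
```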
|
OfekGlick/DiscoEval
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:bsd",
"Discourse",
"Discourse Evaluation",
"NLP",
"arxiv:1909.00142",
"region:us"
] |
2023-09-22T22:22:52+00:00
|
{"language": ["en"], "license": "bsd", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "DiscoEval", "tags": ["Discourse", "Discourse Evaluation", "NLP"]}
|
2023-11-06T14:06:49+00:00
|
[
"1909.00142"
] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-bsd #Discourse #Discourse Evaluation #NLP #arxiv-1909.00142 #region-us
|
DiscoEval Benchmark Datasets
============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Dataset Sources
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Additional Information
+ Benchmark Creators
+ Citation Information
* Loading Data Examples
+ Loading Data for Sentence Positioning Task with the Arxiv data source
Dataset Description
-------------------
* Repository: DiscoEval repository
* Paper: Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations
### Dataset Summary
DiscoEval is an English-language benchmark that contains a test suite of 7
tasks to evaluate whether sentence representations include semantic information
relevant to discourse processing. The benchmark datasets offer a collection of
tasks designed to evaluate natural language understanding models in the context
of discourse analysis and coherence.
### Dataset Sources
* Arxiv: A repository of scientific papers and research articles.
* Wikipedia: An extensive online encyclopedia with articles on diverse topics.
* Rocstory: A dataset consisting of fictional stories.
* Ubuntu IRC channel: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.
* PeerRead: A dataset of scientific papers frequently used for discourse-related tasks.
* RST Discourse Treebank: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.
* Penn Discourse Treebank: Another dataset with annotated discourse relations, facilitating the study of discourse structure.
### Supported Tasks
1. Sentence Positioning
* Datasets Sources: Arxiv, Wikipedia, Rocstory
* Description: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers, encode the five sentences to vector representations \(x\_i\). As input to the classifier we include \(x\_1\) and the concatenation of \(x\_1 - x\_i\) for all \(i\): \([x\_1, x\_1 - x\_2, x\_1-x\_3,x\_1-x\_4,x\_1-x\_5]\)
2. Binary Sentence Ordering
* Datasets Sources: Arxiv, Wikipedia, Rocstory
* Description: Determining whether two sentences are in the correct consecutive order, identifying the more coherent structure. To form the input when training classifiers, we concatenate the embeddings of both sentences with their element-wise difference: \([x\_1, x\_2, x\_1-x\_2]\)
3. Discourse Coherence
* Datasets Sources: Ubuntu IRC channel, Wikipedia
* Description: Determine whether a sequence of six sentences forms a coherent paragraph. To form the input when training classifiers, encode all sentences to vector representations and concatenate all of them: \([x\_1, x\_2, x\_3, x\_4, x\_5, x\_6]\)
4. Sentence Section Prediction
* Datasets Sources: Constructed from PeerRead
* Description: Determine the section or category to which a sentence belongs within a scientific paper, based on the content and context. To form the input when training classifiers, simply input the sentence embedding.
5. Discourse Relations
* Datasets Sources: RST Discourse Treebank, Penn Discourse Treebank
* Description: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse. To form the input when training classifiers, refer to the original paper for instructions
### Languages
The text in all datasets is in English. The associated BCP-47 code is 'en'.
Dataset Structure
-----------------
### Data Instances
All tasks are classification tasks, and they differ by the number of sentences per example and the type of label.
An example from the Sentence Positioning task would look as follows:
The label is '1' since the first sentence should go at position number 1 (counting from zero).
An example from the Binary Sentence Ordering task would look as follows:
The label is '0' because this is not the correct order of the sentences. It should be sentence\_2 and then sentence\_1.
For more examples, you can refer to the original paper.
### Data Fields
In this benchmark, all data fields are strings, including the labels.
### Data Splits
The data is split into training, validation and test set for each of the tasks in the benchmark.
Additional Information
----------------------
### Benchmark Creators
This benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technological Institute at Chicago.
Loading Data Examples
---------------------
### Loading Data for Sentence Positioning Task with the Arxiv data source
The other possible inputs for the 'name' parameter are:
'SParxiv', 'SProcstory', 'SPwiki', 'SSPabs', 'PDTB-I', 'PDTB-E', 'BSOarxiv', 'BSOrocstory', 'BSOwiki', 'DCchat', 'DCwiki', 'RST'
|
[
"### Dataset Summary\n\n\nThe DiscoEval is an English-language Benchmark that contains a test suite of 7\ntasks to evaluate whether sentence representations include semantic information\nrelevant to discourse processing. The benchmark datasets offer a collection of\ntasks designed to evaluate natural language understanding models in the context\nof discourse analysis and coherence.",
"### Dataset Sources\n\n\n* Arxiv: A repository of scientific papers and research articles.\n* Wikipedia: An extensive online encyclopedia with articles on diverse topics.\n* Rocstory: A dataset consisting of fictional stories.\n* Ubuntu IRC channel: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.\n* PeerRead: A dataset of scientific papers frequently used for discourse-related tasks.\n* RST Discourse Treebank: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.\n* Penn Discourse Treebank: Another dataset with annotated discourse relations, facilitating the study of discourse structure.",
"### Supported Tasks\n\n\n1. Sentence Positioning\n\n\n\t* Datasets Sources: Arxiv, Wikipedia, Rocstory\n\t* Description: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers encode the five sentences to vector representations \\(x\\_i\\). As input to the classfier we include \\(x\\_1\\) and the contcatination of \\(x\\_1 - x\\_i\\) for all \\(i\\): \\([x\\_1, x\\_1 - x\\_2, x\\_1-x\\_3,x\\_1-x\\_4,x\\_1-x\\_5]\\)\n2. Binary Sentence Ordering\n\n\n\t* Datasets Sources: Arxiv, Wikipedia, Rocstory\n\t* Description: Determining whether two sentences are in the correct consecutive order, identifying the more coherent structure. To form the input when training classifiers, we concatenate the embeddings of both sentences with their element-wise difference: \\([x\\_1, x\\_2, x\\_1-x\\_2]\\)\n3. Discourse Coherence\n\n\n\t* Datasets Sources: Ubuntu IRC channel, Wikipedia\n\t* Description: Determine whether a sequence of six sentences form a coherent paragraph. To form the input when training classifiers, encode all sentences to vector representations and concatenate all of them: \\([x\\_1, x\\_2, x\\_3, x\\_4, x\\_5, x\\_6]\\)\n4. Sentence Section Prediction\n\n\n\t* Datasets Sources: Constructed from PeerRead\n\t* Description: Determine the section or category to which a sentence belongs within a scientific paper, based on the content and context. To form the input when training classifiers, simply input the sentence embedding.\n5. Discourse Relations\n\n\n\t* Datasets Sources: RST Discourse Treebank, Penn Discourse Treebank\n\t* Description: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse. To form the input when training classifiers, refer to the original paper for instructions",
"### Languages\n\n\nThe text in all datasets is in English. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAll tasks are classification tasks, and they differ by the number of sentences per example and the type of label.\n\n\nAn example from the Sentence Positioning task would look as follows:\n\n\nThe label is '1' since the first sentence should go at position number 1 (counting from zero)\n\n\nAn example from the Binary Sentence Ordering task would look as follows:\n\n\nThe label is '0' because this is not the correct order of the sentences. It should be sentence\\_2 and then sentence\\_1.\n\n\nFor more examples, you can refer the original paper.",
"### Data Fields\n\n\nIn this benchmark, all data fields are string, including the labels.",
"### Data Splits\n\n\nThe data is split into training, validation and test set for each of the tasks in the benchmark.\n\n\n\nAdditional Information\n----------------------",
"### Benchmark Creators\n\n\nThis benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technologival Institute at Chicago.\n\n\nLoading Data Examples\n---------------------",
"### Loading Data for Sentence Positioning Task with the Arxiv data source\n\n\nThe other possible inputs for the 'name' parameter are:\n'SParxiv', 'SProcstory', 'SPwiki', 'SSPabs', 'PDTB-I', 'PDTB-E', 'BSOarxiv', 'BSOrocstory', 'BSOwiki', 'DCchat', 'DCwiki', 'RST'"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-bsd #Discourse #Discourse Evaluation #NLP #arxiv-1909.00142 #region-us \n",
"### Dataset Summary\n\n\nThe DiscoEval is an English-language Benchmark that contains a test suite of 7\ntasks to evaluate whether sentence representations include semantic information\nrelevant to discourse processing. The benchmark datasets offer a collection of\ntasks designed to evaluate natural language understanding models in the context\nof discourse analysis and coherence.",
"### Dataset Sources\n\n\n* Arxiv: A repository of scientific papers and research articles.\n* Wikipedia: An extensive online encyclopedia with articles on diverse topics.\n* Rocstory: A dataset consisting of fictional stories.\n* Ubuntu IRC channel: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.\n* PeerRead: A dataset of scientific papers frequently used for discourse-related tasks.\n* RST Discourse Treebank: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.\n* Penn Discourse Treebank: Another dataset with annotated discourse relations, facilitating the study of discourse structure.",
"### Supported Tasks\n\n\n1. Sentence Positioning\n\n\n\t* Datasets Sources: Arxiv, Wikipedia, Rocstory\n\t* Description: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers encode the five sentences to vector representations \\(x\\_i\\). As input to the classfier we include \\(x\\_1\\) and the contcatination of \\(x\\_1 - x\\_i\\) for all \\(i\\): \\([x\\_1, x\\_1 - x\\_2, x\\_1-x\\_3,x\\_1-x\\_4,x\\_1-x\\_5]\\)\n2. Binary Sentence Ordering\n\n\n\t* Datasets Sources: Arxiv, Wikipedia, Rocstory\n\t* Description: Determining whether two sentences are in the correct consecutive order, identifying the more coherent structure. To form the input when training classifiers, we concatenate the embeddings of both sentences with their element-wise difference: \\([x\\_1, x\\_2, x\\_1-x\\_2]\\)\n3. Discourse Coherence\n\n\n\t* Datasets Sources: Ubuntu IRC channel, Wikipedia\n\t* Description: Determine whether a sequence of six sentences form a coherent paragraph. To form the input when training classifiers, encode all sentences to vector representations and concatenate all of them: \\([x\\_1, x\\_2, x\\_3, x\\_4, x\\_5, x\\_6]\\)\n4. Sentence Section Prediction\n\n\n\t* Datasets Sources: Constructed from PeerRead\n\t* Description: Determine the section or category to which a sentence belongs within a scientific paper, based on the content and context. To form the input when training classifiers, simply input the sentence embedding.\n5. Discourse Relations\n\n\n\t* Datasets Sources: RST Discourse Treebank, Penn Discourse Treebank\n\t* Description: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse. To form the input when training classifiers, refer to the original paper for instructions",
"### Languages\n\n\nThe text in all datasets is in English. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAll tasks are classification tasks, and they differ by the number of sentences per example and the type of label.\n\n\nAn example from the Sentence Positioning task would look as follows:\n\n\nThe label is '1' since the first sentence should go at position number 1 (counting from zero)\n\n\nAn example from the Binary Sentence Ordering task would look as follows:\n\n\nThe label is '0' because this is not the correct order of the sentences. It should be sentence\\_2 and then sentence\\_1.\n\n\nFor more examples, you can refer the original paper.",
"### Data Fields\n\n\nIn this benchmark, all data fields are string, including the labels.",
"### Data Splits\n\n\nThe data is split into training, validation and test set for each of the tasks in the benchmark.\n\n\n\nAdditional Information\n----------------------",
"### Benchmark Creators\n\n\nThis benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technologival Institute at Chicago.\n\n\nLoading Data Examples\n---------------------",
"### Loading Data for Sentence Positioning Task with the Arxiv data source\n\n\nThe other possible inputs for the 'name' parameter are:\n'SParxiv', 'SProcstory', 'SPwiki', 'SSPabs', 'PDTB-I', 'PDTB-E', 'BSOarxiv', 'BSOrocstory', 'BSOwiki', 'DCchat', 'DCwiki', 'RST'"
] |
[
60,
78,
162,
515,
33,
129,
21,
34,
48,
104
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-bsd #Discourse #Discourse Evaluation #NLP #arxiv-1909.00142 #region-us \n### Dataset Summary\n\n\nThe DiscoEval is an English-language Benchmark that contains a test suite of 7\ntasks to evaluate whether sentence representations include semantic information\nrelevant to discourse processing. The benchmark datasets offer a collection of\ntasks designed to evaluate natural language understanding models in the context\nof discourse analysis and coherence.### Dataset Sources\n\n\n* Arxiv: A repository of scientific papers and research articles.\n* Wikipedia: An extensive online encyclopedia with articles on diverse topics.\n* Rocstory: A dataset consisting of fictional stories.\n* Ubuntu IRC channel: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.\n* PeerRead: A dataset of scientific papers frequently used for discourse-related tasks.\n* RST Discourse Treebank: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.\n* Penn Discourse Treebank: Another dataset with annotated discourse relations, facilitating the study of discourse structure."
] |
ad28908718507939d2ee589dfdbb6c1795997dd9
|
# Dataset Card for "TinyStories2-ascii-bpe-2k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cyrilzhang/TinyStories2-ascii-bpe-2k
|
[
"region:us"
] |
2023-09-22T22:23:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 2369808200, "num_examples": 578002}, {"name": "validation", "num_bytes": 23866100, "num_examples": 5821}], "download_size": 827963790, "dataset_size": 2393674300}}
|
2023-09-22T22:24:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "TinyStories2-ascii-bpe-2k"
More Information needed
|
[
"# Dataset Card for \"TinyStories2-ascii-bpe-2k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"TinyStories2-ascii-bpe-2k\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"TinyStories2-ascii-bpe-2k\"\n\nMore Information needed"
] |
401569d7824b099a804e1b2b8e924b2a8c5a48bb
|
# Dataset Card for "instructionPairedFormularDataset13k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crewdon/instructionPairedFormularDataset13k
|
[
"region:us"
] |
2023-09-22T23:19:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3190559, "num_examples": 13655}], "download_size": 1482698, "dataset_size": 3190559}}
|
2023-09-22T23:19:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instructionPairedFormularDataset13k"
More Information needed
|
[
"# Dataset Card for \"instructionPairedFormularDataset13k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instructionPairedFormularDataset13k\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instructionPairedFormularDataset13k\"\n\nMore Information needed"
] |
97f0f599d2c6f8f860266a1fc64d80b42972f2c9
|
# Dataset Card
This dataset contains image-caption pairs for logo designs scraped from logobook.com. It was created for my research project to fine-tune text-to-image diffusion models on logo designs.
Logobook.com has a very nice logo archive consisting of modernist and simplistic logo designs. Each design is stored along with some keywords. I used these keywords to create a caption for each logo design.
See example below:

Caption:
Adams Law, a prominent law firm in Ireland, features a sleek and professional logo design by Jeremy Simmons of Process. The logo showcases a symbolic letter 'A' enclosed within a circular frame, representing unity and integrity. The inclusion of the word 'Ireland' emphasizes the firm's local expertise and dedication to serving the Irish community. A subtle quotation mark adds a touch of elegance and sophistication, reflecting Adams Law's commitment to delivering impactful legal solutions. This timeless logo design, created in 2017, effectively captures the firm's professionalism and legal expertise.
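The dataset can be loaded with the `datasets` library (a minimal sketch; the `image` and `text` field names come from the dataset metadata):
```python
from datasets import load_dataset

logos = load_dataset("mozci/logobookDB", split="train")
sample = logos[0]
sample["image"]  # PIL image of the logo design
sample["text"]   # the generated caption
```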
## Copyright disclaimer
Created and used for research purposes.
|
mozci/logobookDB
|
[
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:afl-3.0",
"brand",
"logo",
"design",
"graphic design",
"region:us"
] |
2023-09-22T23:29:14+00:00
|
{"language": ["en"], "license": "afl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "pretty_name": "Logobook Archive with Captions", "tags": ["brand", "logo", "design", "graphic design"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162614866.176, "num_examples": 4026}], "download_size": 139569721, "dataset_size": 162614866.176}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T01:15:39+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-to-image #size_categories-1K<n<10K #language-English #license-afl-3.0 #brand #logo #design #graphic design #region-us
|
# Dataset Card
This dataset contains image-caption pairs for logo designs scraped from URL. It was created for my research project to fine-tune text-to-image diffusion models on logo designs.
URL has a very nice logo archive consisting of modernist and simplistic logo designs. Each design is stored along with some keywords. I used these keywords to create a caption for each logo design.
See example below:
!image/jpeg
Caption:
Adams Law, a prominent law firm in Ireland, features a sleek and professional logo design by Jeremy Simmons of Process. The logo showcases a symbolic letter 'A' enclosed within a circular frame, representing unity and integrity. The inclusion of the word 'Ireland' emphasizes the firm's local expertise and dedication to serving the Irish community. A subtle quotation mark adds a touch of elegance and sophistication, reflecting Adams Law's commitment to delivering impactful legal solutions. This timeless logo design, created in 2017, effectively captures the firm's professionalism and legal expertise.
## Copyright disclaimer
Created and used for research purposes.
|
[
"# Dataset Card\n\n\nThis dataset contains image caption pairs for logo designs screped from URL. It is created for my research project to finetune text-image diffusion models with logo designs.\n\nURL has a very nice logo archive consisting of modernist and simplistic logo designs. Each design stored along with some keywords. I used these keywords to create a caption for the logo designs.\n\nSee example below:\n\n!image/jpeg\n\nCaption:\n\nAdams Law, a prominent law firm in Ireland, features a sleek and professional logo design by Jeremy Simmons of Process. The logo showcases a symbolic letter 'A' enclosed within a circular frame, representing unity and integrity. The inclusion of the word 'Ireland' emphasizes the firm's local expertise and dedication to serving the Irish community. A subtle quotation mark adds a touch of elegance and sophistication, reflecting Adams Law's commitment to delivering impactful legal solutions. This timeless logo design, created in 2017, effectively captures the firm's professionalism and legal expertise.",
"## Copyright disclaimer\nCreated and used for research purposes."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-1K<n<10K #language-English #license-afl-3.0 #brand #logo #design #graphic design #region-us \n",
"# Dataset Card\n\n\nThis dataset contains image caption pairs for logo designs screped from URL. It is created for my research project to finetune text-image diffusion models with logo designs.\n\nURL has a very nice logo archive consisting of modernist and simplistic logo designs. Each design stored along with some keywords. I used these keywords to create a caption for the logo designs.\n\nSee example below:\n\n!image/jpeg\n\nCaption:\n\nAdams Law, a prominent law firm in Ireland, features a sleek and professional logo design by Jeremy Simmons of Process. The logo showcases a symbolic letter 'A' enclosed within a circular frame, representing unity and integrity. The inclusion of the word 'Ireland' emphasizes the firm's local expertise and dedication to serving the Irish community. A subtle quotation mark adds a touch of elegance and sophistication, reflecting Adams Law's commitment to delivering impactful legal solutions. This timeless logo design, created in 2017, effectively captures the firm's professionalism and legal expertise.",
"## Copyright disclaimer\nCreated and used for research purposes."
] |
[
51,
242,
14
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-1K<n<10K #language-English #license-afl-3.0 #brand #logo #design #graphic design #region-us \n# Dataset Card\n\n\nThis dataset contains image caption pairs for logo designs screped from URL. It is created for my research project to finetune text-image diffusion models with logo designs.\n\nURL has a very nice logo archive consisting of modernist and simplistic logo designs. Each design stored along with some keywords. I used these keywords to create a caption for the logo designs.\n\nSee example below:\n\n!image/jpeg\n\nCaption:\n\nAdams Law, a prominent law firm in Ireland, features a sleek and professional logo design by Jeremy Simmons of Process. The logo showcases a symbolic letter 'A' enclosed within a circular frame, representing unity and integrity. The inclusion of the word 'Ireland' emphasizes the firm's local expertise and dedication to serving the Irish community. A subtle quotation mark adds a touch of elegance and sophistication, reflecting Adams Law's commitment to delivering impactful legal solutions. This timeless logo design, created in 2017, effectively captures the firm's professionalism and legal expertise.## Copyright disclaimer\nCreated and used for research purposes."
] |
301775b4806162dc48045462123c83dddaf43b8d
|
# Dataset Card for "hh-generated_flan_t5_large_flan_t5_small_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/hh-generated_flan_t5_large_flan_t5_small_zeroshot
|
[
"region:us"
] |
2023-09-22T23:42:04+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "zeroshot_helpfulness", "dtype": "float64"}, {"name": "zeroshot_specificity", "dtype": "float64"}, {"name": "zeroshot_intent", "dtype": "float64"}, {"name": "zeroshot_factuality", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand", "dtype": "float64"}, {"name": "zeroshot_relevance", "dtype": "float64"}, {"name": "zeroshot_readability", "dtype": "float64"}, {"name": "zeroshot_enough-detail", "dtype": "float64"}, {"name": "zeroshot_biased:", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences", "dtype": "float64"}, {"name": "zeroshot_repetetive", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context", "dtype": "float64"}, {"name": "zeroshot_too-long", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 6336357, "num_examples": 25600}], "download_size": 726503, "dataset_size": 6336357}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T23:42:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hh-generated_flan_t5_large_flan_t5_small_zeroshot"
More Information needed
|
[
"# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
6,
33
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
fb7190fdc564d94a8c80cf38d072897024302a98
|
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_flan_t5_small_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/hh-rlhf_with_features_flan_t5_large_flan_t5_small_zeroshot
|
[
"region:us"
] |
2023-09-22T23:42:11+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "helpfulness_chosen", "dtype": "int64"}, {"name": "helpfulness_rejected", "dtype": "int64"}, {"name": "specificity_chosen", "dtype": "int64"}, {"name": "specificity_rejected", "dtype": "int64"}, {"name": "intent_chosen", "dtype": "int64"}, {"name": "intent_rejected", "dtype": "int64"}, {"name": "factuality_chosen", "dtype": "int64"}, {"name": "factuality_rejected", "dtype": "int64"}, {"name": "easy-to-understand_chosen", "dtype": "int64"}, {"name": "easy-to-understand_rejected", "dtype": "int64"}, {"name": "relevance_chosen", "dtype": "int64"}, {"name": "relevance_rejected", "dtype": "int64"}, {"name": "readability_chosen", "dtype": "int64"}, {"name": "readability_rejected", "dtype": "int64"}, {"name": "enough-detail_chosen", "dtype": "int64"}, {"name": "enough-detail_rejected", "dtype": "int64"}, {"name": "biased:_chosen", "dtype": "int64"}, {"name": "biased:_rejected", "dtype": "int64"}, {"name": "fail-to-consider-individual-preferences_chosen", "dtype": "int64"}, {"name": "fail-to-consider-individual-preferences_rejected", "dtype": "int64"}, {"name": "repetetive_chosen", "dtype": "int64"}, {"name": "repetetive_rejected", "dtype": "int64"}, {"name": "fail-to-consider-context_chosen", "dtype": "int64"}, {"name": "fail-to-consider-context_rejected", "dtype": "int64"}, {"name": "too-long_chosen", "dtype": "int64"}, {"name": "too-long_rejected", "dtype": "int64"}, {"name": "human", "dtype": "string"}, {"name": "assistant_chosen", "dtype": "string"}, {"name": "assistant_rejected", "dtype": "string"}, {"name": "log_score_chosen", "dtype": "float64"}, {"name": "log_score_rejected", "dtype": "float64"}, {"name": "labels", "dtype": "string"}, {"name": "zeroshot_helpfulness_chosen", "dtype": "float64"}, {"name": "zeroshot_helpfulness_rejected", "dtype": "float64"}, {"name": "zeroshot_specificity_chosen", "dtype": "float64"}, {"name": "zeroshot_specificity_rejected", "dtype": "float64"}, {"name": "zeroshot_intent_chosen", "dtype": "float64"}, {"name": "zeroshot_intent_rejected", "dtype": "float64"}, {"name": "zeroshot_factuality_chosen", "dtype": "float64"}, {"name": "zeroshot_factuality_rejected", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand_chosen", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand_rejected", "dtype": "float64"}, {"name": "zeroshot_relevance_chosen", "dtype": "float64"}, {"name": "zeroshot_relevance_rejected", "dtype": "float64"}, {"name": "zeroshot_readability_chosen", "dtype": "float64"}, {"name": "zeroshot_readability_rejected", "dtype": "float64"}, {"name": "zeroshot_enough-detail_chosen", "dtype": "float64"}, {"name": "zeroshot_enough-detail_rejected", "dtype": "float64"}, {"name": "zeroshot_biased:_chosen", "dtype": "float64"}, {"name": "zeroshot_biased:_rejected", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_chosen", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_rejected", "dtype": "float64"}, {"name": "zeroshot_repetetive_chosen", "dtype": "float64"}, {"name": "zeroshot_repetetive_rejected", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context_chosen", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context_rejected", "dtype": "float64"}, {"name": "zeroshot_too-long_chosen", "dtype": "float64"}, 
{"name": "zeroshot_too-long_rejected", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 16425816, "num_examples": 9574}, {"name": "test", "num_bytes": 16369741, "num_examples": 9574}], "download_size": 15963958, "dataset_size": 32795557}}
|
2023-09-22T23:43:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_flan_t5_small_zeroshot"
More Information needed
|
[
"# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
6,
39
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
0de7fa6ae05346fc3e73c9139a2015a2c82c35cb
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_small_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/shp-generated_flan_t5_large_flan_t5_small_zeroshot
|
[
"region:us"
] |
2023-09-22T23:43:12+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "zeroshot_helpfulness", "dtype": "float64"}, {"name": "zeroshot_specificity", "dtype": "float64"}, {"name": "zeroshot_intent", "dtype": "float64"}, {"name": "zeroshot_factuality", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand", "dtype": "float64"}, {"name": "zeroshot_relevance", "dtype": "float64"}, {"name": "zeroshot_readability", "dtype": "float64"}, {"name": "zeroshot_enough-detail", "dtype": "float64"}, {"name": "zeroshot_biased:", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences", "dtype": "float64"}, {"name": "zeroshot_repetetive", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context", "dtype": "float64"}, {"name": "zeroshot_too-long", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 29493865, "num_examples": 25600}], "download_size": 1808580, "dataset_size": 29493865}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T23:45:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_small_zeroshot"
More Information needed
|
[
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
6,
34
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
97ab132cca6fc3eca66ce5237eb4dceaf59a885c
|
# Dataset Card for "shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot
|
[
"region:us"
] |
2023-09-22T23:45:14+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "post_id", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "history", "dtype": "string"}, {"name": "c_root_id_A", "dtype": "string"}, {"name": "c_root_id_B", "dtype": "string"}, {"name": "created_at_utc_A", "dtype": "int64"}, {"name": "created_at_utc_B", "dtype": "int64"}, {"name": "score_A", "dtype": "int64"}, {"name": "score_B", "dtype": "int64"}, {"name": "human_ref_A", "dtype": "string"}, {"name": "human_ref_B", "dtype": "string"}, {"name": "labels", "dtype": "int64"}, {"name": "seconds_difference", "dtype": "float64"}, {"name": "score_ratio", "dtype": "float64"}, {"name": "helpfulness_A", "dtype": "float64"}, {"name": "helpfulness_B", "dtype": "float64"}, {"name": "specificity_A", "dtype": "float64"}, {"name": "specificity_B", "dtype": "float64"}, {"name": "intent_A", "dtype": "float64"}, {"name": "intent_B", "dtype": "float64"}, {"name": "factuality_A", "dtype": "float64"}, {"name": "factuality_B", "dtype": "float64"}, {"name": "easy-to-understand_A", "dtype": "float64"}, {"name": "easy-to-understand_B", "dtype": "float64"}, {"name": "relevance_A", "dtype": "float64"}, {"name": "relevance_B", "dtype": "float64"}, {"name": "readability_A", "dtype": "float64"}, {"name": "readability_B", "dtype": "float64"}, {"name": "enough-detail_A", "dtype": "float64"}, {"name": "enough-detail_B", "dtype": "float64"}, {"name": "biased:_A", "dtype": "float64"}, {"name": "biased:_B", "dtype": "float64"}, {"name": "fail-to-consider-individual-preferences_A", "dtype": "float64"}, {"name": "fail-to-consider-individual-preferences_B", "dtype": "float64"}, {"name": "repetetive_A", "dtype": "float64"}, {"name": "repetetive_B", "dtype": "float64"}, {"name": "fail-to-consider-context_A", "dtype": "float64"}, {"name": "fail-to-consider-context_B", "dtype": "float64"}, {"name": "too-long_A", "dtype": "float64"}, {"name": "too-long_B", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "log_score_A", "dtype": "float64"}, {"name": "log_score_B", "dtype": "float64"}, {"name": "zeroshot_helpfulness_A", "dtype": "float64"}, {"name": "zeroshot_helpfulness_B", "dtype": "float64"}, {"name": "zeroshot_specificity_A", "dtype": "float64"}, {"name": "zeroshot_specificity_B", "dtype": "float64"}, {"name": "zeroshot_intent_A", "dtype": "float64"}, {"name": "zeroshot_intent_B", "dtype": "float64"}, {"name": "zeroshot_factuality_A", "dtype": "float64"}, {"name": "zeroshot_factuality_B", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand_A", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand_B", "dtype": "float64"}, {"name": "zeroshot_relevance_A", "dtype": "float64"}, {"name": "zeroshot_relevance_B", "dtype": "float64"}, {"name": "zeroshot_readability_A", "dtype": "float64"}, {"name": "zeroshot_readability_B", "dtype": "float64"}, {"name": "zeroshot_enough-detail_A", "dtype": "float64"}, {"name": "zeroshot_enough-detail_B", "dtype": "float64"}, {"name": "zeroshot_biased:_A", "dtype": "float64"}, {"name": "zeroshot_biased:_B", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_A", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_B", "dtype": "float64"}, {"name": "zeroshot_repetetive_A", "dtype": "float64"}, {"name": "zeroshot_repetetive_B", "dtype": "float64"}, {"name": 
"zeroshot_fail-to-consider-context_A", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context_B", "dtype": "float64"}, {"name": "zeroshot_too-long_A", "dtype": "float64"}, {"name": "zeroshot_too-long_B", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 22674534, "num_examples": 9459}, {"name": "test", "num_bytes": 22627412, "num_examples": 9459}], "download_size": 24128568, "dataset_size": 45301946}}
|
2023-09-22T23:47:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot"
More Information needed
|
[
"# Dataset Card for \"shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
[
6,
39
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot\"\n\nMore Information needed"
] |
20a6f61cc7e5440f719fbeab0a7d43c8bc29ae6b
|
# Dataset Card for "9537a11b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/9537a11b
|
[
"region:us"
] |
2023-09-22T23:58:25+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 168, "num_examples": 10}], "download_size": 1356, "dataset_size": 168}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T23:58:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "9537a11b"
More Information needed
|
[
"# Dataset Card for \"9537a11b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"9537a11b\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"9537a11b\"\n\nMore Information needed"
] |
030b0617e85687e635aee38645d0d6667de79c31
|
# Dataset Card for "GSM8K_Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sarahpann/GSM8K_Test
|
[
"region:us"
] |
2023-09-23T00:06:49+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "answers", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 109109, "num_examples": 200}], "download_size": 64938, "dataset_size": 109109}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-09-23T19:09:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "GSM8K_Test"
More Information needed
|
[
"# Dataset Card for \"GSM8K_Test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"GSM8K_Test\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"GSM8K_Test\"\n\nMore Information needed"
] |
420f4ea13985fdff9e48700eae46514f1a972ac0
|
# Dataset Card for "new-eng-mai"
Newari (new) and Maithili (mai) are indigenous languages of Nepal. This dataset contains translations of English sentences into both Newari and Maithili.
The data was scraped from the internet and various Facebook posts.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
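Given the feature schema in this dataset's metadata (each row holds a `translation` struct with `en`, `mai`, and `new` string fields), a minimal loading sketch might look like this; only the repo id and field names come from the card, the rest is illustrative:

```python
from datasets import load_dataset

# Load the train split of the English/Newari/Maithili translation dataset.
ds = load_dataset("Unspoiled-Egg/indigenous-eng-translation", split="train")

# Each example holds a "translation" struct with "en", "mai", and "new" keys.
triple = ds[0]["translation"]
print("English :", triple["en"])
print("Maithili:", triple["mai"])
print("Newari  :", triple["new"])
```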
|
Unspoiled-Egg/indigenous-eng-translation
|
[
"language:en",
"language:new",
"language:mai",
"region:us"
] |
2023-09-23T00:09:56+00:00
|
{"language": ["en", "new", "mai"], "dataset_info": {"features": [{"name": "translation", "struct": [{"name": "en", "dtype": "string"}, {"name": "mai", "dtype": "string"}, {"name": "new", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 700903, "num_examples": 2494}], "download_size": 323847, "dataset_size": 700903}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T00:22:25+00:00
|
[] |
[
"en",
"new",
"mai"
] |
TAGS
#language-English #language-Newari #language-Maithili #region-us
|
# Dataset Card for "new-eng-mai"
Newari (new) and Maithili (mai) are indigenous languages of Nepal. This dataset contains translations of English sentences into both Newari and Maithili.
The data was scraped from the internet and various Facebook posts.
More Information needed
|
[
"# Dataset Card for \"new-eng-mai\"\nNewari(new) and maithali(mai) are indigenous languages of Nepal. This dataset contains language translation of english sentence into both newari and maithali.\nThese datas are scrapped from internet and various facebook posts.\n\nMore Information needed"
] |
[
"TAGS\n#language-English #language-Newari #language-Maithili #region-us \n",
"# Dataset Card for \"new-eng-mai\"\nNewari(new) and maithali(mai) are indigenous languages of Nepal. This dataset contains language translation of english sentence into both newari and maithali.\nThese datas are scrapped from internet and various facebook posts.\n\nMore Information needed"
] |
[
21,
66
] |
[
"passage: TAGS\n#language-English #language-Newari #language-Maithili #region-us \n# Dataset Card for \"new-eng-mai\"\nNewari(new) and maithali(mai) are indigenous languages of Nepal. This dataset contains language translation of english sentence into both newari and maithali.\nThese datas are scrapped from internet and various facebook posts.\n\nMore Information needed"
] |
8b51ddfeed8cb75c7c0eac50ba89e9d2c6c5d4f4
|
# Open-access textbooks
A collection of open-access textbooks with associated additional files (exercises, answer sheets, code samples, etc.).
Downloaded from various online sources, mainly from UMT.
Total books: 392
Total tokens: 136,088,716
Base tokens: 87,874,318
Additional tokens: 48,214,398
|
benxh/open-access-books-v1
|
[
"size_categories:n<1K",
"language:en",
"books",
"open access",
"region:us"
] |
2023-09-23T00:13:33+00:00
|
{"language": ["en"], "size_categories": ["n<1K"], "tags": ["books", "open access"]}
|
2023-09-23T00:28:06+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-n<1K #language-English #books #open access #region-us
|
# Open-access textbooks
A collection of open-access textbooks with associated additional files (exercises, answer sheets, code samples, etc.).
Downloaded from various online sources, mainly from UMT.
Total books: 392
Total tokens: 136,088,716
Base tokens: 87,874,318
Additional tokens: 48,214,398
|
[
"# Open access text-books\n\nA collection of open-access text-books with associated additional files (exercises, answersheets, code samples, etc).\n\nDownloaded from various online sources, mainly from UMT.\n\nTotal books: 392\n\nTotal tokens: 136,088,716\n\nBase tokens: 87,874,318\n\nAdditional tokens: 48,214,398"
] |
[
"TAGS\n#size_categories-n<1K #language-English #books #open access #region-us \n",
"# Open access text-books\n\nA collection of open-access text-books with associated additional files (exercises, answersheets, code samples, etc).\n\nDownloaded from various online sources, mainly from UMT.\n\nTotal books: 392\n\nTotal tokens: 136,088,716\n\nBase tokens: 87,874,318\n\nAdditional tokens: 48,214,398"
] |
[
25,
83
] |
[
"passage: TAGS\n#size_categories-n<1K #language-English #books #open access #region-us \n# Open access text-books\n\nA collection of open-access text-books with associated additional files (exercises, answersheets, code samples, etc).\n\nDownloaded from various online sources, mainly from UMT.\n\nTotal books: 392\n\nTotal tokens: 136,088,716\n\nBase tokens: 87,874,318\n\nAdditional tokens: 48,214,398"
] |
6683460368a573574ab28c0a5d32d3eca80eae64
|
# darija-reviews
**Description**: This small dataset consists of product and service reviews from social media written in Darija, encompassing both Arabic and Arabizi writing styles.
Each review is categorized by its polarity (positive, negative, or neutral) and includes information about the domain or topic, as well as the writing style of the content.
The dataset encompasses a broad spectrum of topics, including clothing, cosmetics, entertainment, hospitality, IT, and other domains.
**Size**: The dataset contains a total of 851 reviews.
**Labels**:
- Sentiment Labels: Positive, Negative, Neutral
- Topics: Automotive, Cleaning, Clothing, Cosmetics, Entertainment, Hospitality, Household Appliances, IT, Jewelry, Restaurants, and Other.
- Writing Styles: Arabic & Arabizi
**Usage**: This dataset can be used for ***evaluating*** the performance of sentiment analysis models in classifying the polarity (positive, negative, or neutral) of product and service reviews written in Darija.
**Examples**:
- Positive IT Review (Arabic WS): "هاتف زوين عجبني"
- Negative Cosmetics Review (Arabizi WS): "Lwa7iid li fach kandirou wejhi kayt9lab blhboub manhar 9te3to ou lhbob hbso hada maysla7ch lbchra dohniya myidem bzzaf"
- Neutral Cleaning Review (Arabic WS): "خليه شوية على الطبيع ومسحي بشيفون فازك"
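For the evaluation use case above, a minimal sketch with the `datasets` library could look like the following. The repo id, split name, and feature names come from this card's metadata; the exact stored value `"Arabizi"` for `writing_style` is an assumption:

```python
from datasets import load_dataset

# Load the single "test" split of the review dataset.
ds = load_dataset("ohidaoui/darija-reviews", split="test")

# Keep only Arabizi-style reviews; the exact label string is assumed
# to match the writing styles listed in this card.
arabizi = ds.filter(lambda row: row["writing_style"] == "Arabizi")
print(len(arabizi), "Arabizi reviews")
print(arabizi[0]["review"], "->", arabizi[0]["label"])
```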
|
ohidaoui/darija-reviews
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"region:us"
] |
2023-09-23T00:35:59+00:00
|
{"size_categories": ["n<1K"], "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "writing_style", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 109030, "num_examples": 851}], "download_size": 50837, "dataset_size": 109030}}
|
2023-09-23T00:57:07+00:00
|
[] |
[] |
TAGS
#task_categories-text-classification #size_categories-n<1K #region-us
|
# darija-reviews
Description: This small dataset consists of product and service reviews from social media written in Darija, encompassing both Arabic and Arabizi writing styles.
Each review is categorized by its polarity (positive, negative, or neutral) and includes information about the domain or topic, as well as the writing style of the content.
The dataset encompasses a broad spectrum of topics, including clothing, cosmetics, entertainment, hospitality, IT, and other domains.
Size: The dataset contains a total of 851 reviews.
Labels:
- Sentiment Labels: Positive, Negative, Neutral
- Topics: Automotive, Cleaning, Clothing, Cosmetics, Entertainment, Hospitality, Household Appliances, IT, Jewelry, Restaurants, and Other.
- Writing Styles: Arabic & Arabizi
Usage: This dataset can be used for *evaluating* the performance of sentiment analysis models in classifying the polarity (positive, negative, or neutral) of product and service reviews written in Darija.
Examples:
- Positive IT Review (Arabic WS): "هاتف زوين عجبني"
- Negative Cosmetics Review (Arabizi WS): "Lwa7iid li fach kandirou wejhi kayt9lab blhboub manhar 9te3to ou lhbob hbso hada maysla7ch lbchra dohniya myidem bzzaf"
- Neutral Cleaning Review (Arabic WS): "خليه شوية على الطبيع ومسحي بشيفون فازك"
|
[
"# darija-reviews\n\n\nDescription: This small dataset consists of product and service reviews from social media written in Darija, encompassing both Arabic and Arabizi writing styles. \nEach review is categorized by its polarity (positive, negative, or neutral) and includes information about the domain or topic, as well as the writing style of the content. \nThe dataset encompasses a broad spectrum of topics, including clothing, cosmetics, entertainment, hospitality, IT, and other domains.\n\nSize: The dataset contains a total of 851 reviews.\n\nLabels:\n\n- Sentiment Labels: Positive, Negative, Neutral \n- Topics: Automotive, Cleaning, Clothing, Cosmetics, Entertainment, Hospitality, Household Appliances, IT, Jewelry, Restaurants, and Other. \n- Writing Styles: Arabic & Arabizi \n\nUsage: This dataset can be used for *evaluating* the performance of sentiment analysis models in classifying the polarity (positive, negative, or neutral) of product and service reviews written in Darija.\n\n \nExamples:\n\n- Positive IT Review (Arabic WS): \"هاتف زوين عجبني\" \n- Negative Cosmetics Review (Arabizi WS): \"Lwa7iid li fach kandirou wejhi kayt9lab blhboub manhar 9te3to ou lhbob hbso hada maysla7ch lbchra dohniya myidem bzzaf\" \n- Neutral Cleaning Review (Arabic WS): \"خليه شوية على الطبيع ومسحي بشيفون فازك\""
] |
[
"TAGS\n#task_categories-text-classification #size_categories-n<1K #region-us \n",
"# darija-reviews\n\n\nDescription: This small dataset consists of product and service reviews from social media written in Darija, encompassing both Arabic and Arabizi writing styles. \nEach review is categorized by its polarity (positive, negative, or neutral) and includes information about the domain or topic, as well as the writing style of the content. \nThe dataset encompasses a broad spectrum of topics, including clothing, cosmetics, entertainment, hospitality, IT, and other domains.\n\nSize: The dataset contains a total of 851 reviews.\n\nLabels:\n\n- Sentiment Labels: Positive, Negative, Neutral \n- Topics: Automotive, Cleaning, Clothing, Cosmetics, Entertainment, Hospitality, Household Appliances, IT, Jewelry, Restaurants, and Other. \n- Writing Styles: Arabic & Arabizi \n\nUsage: This dataset can be used for *evaluating* the performance of sentiment analysis models in classifying the polarity (positive, negative, or neutral) of product and service reviews written in Darija.\n\n \nExamples:\n\n- Positive IT Review (Arabic WS): \"هاتف زوين عجبني\" \n- Negative Cosmetics Review (Arabizi WS): \"Lwa7iid li fach kandirou wejhi kayt9lab blhboub manhar 9te3to ou lhbob hbso hada maysla7ch lbchra dohniya myidem bzzaf\" \n- Neutral Cleaning Review (Arabic WS): \"خليه شوية على الطبيع ومسحي بشيفون فازك\""
] |
[
27,
352
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #region-us \n# darija-reviews\n\n\nDescription: This small dataset consists of product and service reviews from social media written in Darija, encompassing both Arabic and Arabizi writing styles. \nEach review is categorized by its polarity (positive, negative, or neutral) and includes information about the domain or topic, as well as the writing style of the content. \nThe dataset encompasses a broad spectrum of topics, including clothing, cosmetics, entertainment, hospitality, IT, and other domains.\n\nSize: The dataset contains a total of 851 reviews.\n\nLabels:\n\n- Sentiment Labels: Positive, Negative, Neutral \n- Topics: Automotive, Cleaning, Clothing, Cosmetics, Entertainment, Hospitality, Household Appliances, IT, Jewelry, Restaurants, and Other. \n- Writing Styles: Arabic & Arabizi \n\nUsage: This dataset can be used for *evaluating* the performance of sentiment analysis models in classifying the polarity (positive, negative, or neutral) of product and service reviews written in Darija.\n\n \nExamples:\n\n- Positive IT Review (Arabic WS): \"هاتف زوين عجبني\" \n- Negative Cosmetics Review (Arabizi WS): \"Lwa7iid li fach kandirou wejhi kayt9lab blhboub manhar 9te3to ou lhbob hbso hada maysla7ch lbchra dohniya myidem bzzaf\" \n- Neutral Cleaning Review (Arabic WS): \"خليه شوية على الطبيع ومسحي بشيفون فازك\""
] |
6de5d65fe209b66218b7b90ceae85b220085a463
|
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform with ShareGPT standard.
This is an example entry in the dataset, noting that turns are variable and can be many or as few as two:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
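To work with entries in this shape, a minimal loading sketch might look like the one below. The field names follow the example entry above; the `"train"` split name is an assumption, since this card defines no explicit splits:

```python
from datasets import load_dataset

# "train" is the default split name on the Hub; this card defines no explicit splits.
ds = load_dataset("CollectiveCognition/chats-data-2023-09-22", split="train")

chat = ds[0]
print(chat["title"], "|", chat["model_name"])

# Walk the ShareGPT-style turns; each turn has "from" and "value" fields.
for turn in chat["conversations"]:
    print(f'{turn["from"]}: {turn["value"][:80]}')
```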
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email [email protected]
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT
|
CollectiveCognition/chats-data-2023-09-22
|
[
"license:mit",
"region:us"
] |
2023-09-23T00:40:24+00:00
|
{"license": "mit"}
|
2023-09-23T01:07:18+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: URL
### Supported Tasks
- Language Model Finetuning: Training through behavior cloning and knowledge instilling.
- Dialogue Understanding: Analyzing the flow and structure of conversations.
- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.
- Topic Modeling: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform with ShareGPT standard.
This is an example entry in the dataset, noting that turns are variable and can be many or as few as two:
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT
|
[
"# Dataset Card for \"Collective Cognition ChatGPT Conversations\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description",
"### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL",
"### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.",
"### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.",
"## Dataset Structure",
"### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are variable and can be many or as few as two:",
"### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).",
"## Dataset Creation",
"### Curation Rationale\nThe dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.",
"### Source Data\nThe data originates from user contributions on the \"Collective Cognition\" website.",
"### Personal and Sensitive Information\nAll chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.",
"### Discussion of Biases\nThere might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.",
"### Other Known Limitations\nThe dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.",
"## Additional Information",
"### Licensing Information\nMIT"
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for \"Collective Cognition ChatGPT Conversations\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description",
"### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL",
"### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.",
"### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.",
"## Dataset Structure",
"### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are variable and can be many or as few as two:",
"### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).",
"## Dataset Creation",
"### Curation Rationale\nThe dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.",
"### Source Data\nThe data originates from user contributions on the \"Collective Cognition\" website.",
"### Personal and Sensitive Information\nAll chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.",
"### Discussion of Biases\nThere might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.",
"### Other Known Limitations\nThe dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.",
"## Additional Information",
"### Licensing Information\nMIT"
] |
[
11,
19,
116,
4,
107,
75,
32,
6,
99,
28,
5,
38,
24,
52,
8,
40,
55,
42,
5,
7
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for \"Collective Cognition ChatGPT Conversations\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.## Dataset Structure### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are variable and can be many or as few as two:### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).## Dataset Creation"
] |
cf97e787ee7abdaba6d6c49ea661df046fca8208
|
# Dataset Card for "book_names_and_fields"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ocolegro/book_names_and_fields
|
[
"region:us"
] |
2023-09-23T00:44:23+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "persons", "list": [{"name": "id", "dtype": "string"}, {"name": "name", "dtype": "string"}]}, {"name": "year", "dtype": "float64"}, {"name": "field_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 218318565, "num_examples": 1480516}], "download_size": 123891575, "dataset_size": 218318565}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T00:44:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "book_names_and_fields"
More Information needed
|
[
"# Dataset Card for \"book_names_and_fields\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"book_names_and_fields\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"book_names_and_fields\"\n\nMore Information needed"
] |
5a4068facac3381bab3728df798e6ef2f1ff2157
|
# Dataset Card for "ehrcomplete_icdfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ricardosantoss/ehrcomplete_icdfiltered
|
[
"region:us"
] |
2023-09-23T01:00:41+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}, {"name": "ICD9_CODE", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 412148729, "num_examples": 38701}, {"name": "test", "num_bytes": 53669368, "num_examples": 5000}, {"name": "validation", "num_bytes": 53033036, "num_examples": 5000}], "download_size": 298774749, "dataset_size": 518851133}}
|
2023-09-23T01:03:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ehrcomplete_icdfiltered"
More Information needed
|
[
"# Dataset Card for \"ehrcomplete_icdfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ehrcomplete_icdfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ehrcomplete_icdfiltered\"\n\nMore Information needed"
] |
6f6b38118980f3caf252aaebeea442a2730c6d54
|
# Dataset Card for "mmm_questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adhok/mmm_questions
|
[
"region:us"
] |
2023-09-23T01:51:49+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1156, "num_examples": 7}], "download_size": 2227, "dataset_size": 1156}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T01:54:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mmm_questions"
More Information needed
|
[
"# Dataset Card for \"mmm_questions\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mmm_questions\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mmm_questions\"\n\nMore Information needed"
] |
2718f66e5bfd3c56e3638a929ced9da3d31ab9cb
|
# Dataset Card for "patacon-730_reduced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
frncscp/patacon-730_reduced
|
[
"region:us"
] |
2023-09-23T02:43:07+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Patacon-False", "1": "Patacon-True"}}}}, {"name": "pca", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 3107006000.0, "num_examples": 874}, {"name": "validation", "num_bytes": 509741671.0, "num_examples": 143}, {"name": "test", "num_bytes": 1572556522.0, "num_examples": 442}], "download_size": 2929242165, "dataset_size": 5189304193.0}}
|
2023-09-23T02:45:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "patacon-730_reduced"
More Information needed
|
[
"# Dataset Card for \"patacon-730_reduced\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"patacon-730_reduced\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"patacon-730_reduced\"\n\nMore Information needed"
] |
0d1f11bc1ffa9278de86d09c4d81610ff0b2af7f
|
# All of libgen[dot]rs non-fiction
~4 million records straight from the source. No summaries/descriptions.
Useful for synthetic textbook generation: when you run out of ideas, just sample book titles and topics from this dataset.
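As a sketch of that sampling idea (the card does not document the record schema, so the `title` column below is purely hypothetical):

```python
import random

from datasets import load_dataset

# Hypothetical schema: the card does not document column names,
# so "title" here is an assumption.
ds = load_dataset("benxh/libgen_titles", split="train")

for i in random.sample(range(len(ds)), k=5):
    print(ds[i]["title"])  # seed topic for synthetic textbook generation
```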
|
benxh/libgen_titles
|
[
"size_categories:1M<n<10M",
"language:en",
"language:ru",
"language:uk",
"language:de",
"language:fr",
"region:us"
] |
2023-09-23T03:26:54+00:00
|
{"language": ["en", "ru", "uk", "de", "fr"], "size_categories": ["1M<n<10M"]}
|
2023-09-23T09:03:24+00:00
|
[] |
[
"en",
"ru",
"uk",
"de",
"fr"
] |
TAGS
#size_categories-1M<n<10M #language-English #language-Russian #language-Ukrainian #language-German #language-French #region-us
|
# All of libgen[dot]rs non-fiction
~4 million records straight from the source. No summaries/descriptions.
Useful for synthetic textbook generation: when you run out of ideas, just sample book titles and topics from this dataset.
|
[
"# All of libgen[dot]rs non-fiction\n\n~4 Million records straight from the source. No summaries/descriptions.\n\nUseful for synthetic textbook generation, when you run out of ideas, just sample book titles and topics from this dataset."
] |
[
"TAGS\n#size_categories-1M<n<10M #language-English #language-Russian #language-Ukrainian #language-German #language-French #region-us \n",
"# All of libgen[dot]rs non-fiction\n\n~4 Million records straight from the source. No summaries/descriptions.\n\nUseful for synthetic textbook generation, when you run out of ideas, just sample book titles and topics from this dataset."
] |
[
44,
60
] |
[
"passage: TAGS\n#size_categories-1M<n<10M #language-English #language-Russian #language-Ukrainian #language-German #language-French #region-us \n# All of libgen[dot]rs non-fiction\n\n~4 Million records straight from the source. No summaries/descriptions.\n\nUseful for synthetic textbook generation, when you run out of ideas, just sample book titles and topics from this dataset."
] |
d25e3d752ea92b83011c54fa8f7c5813e52f6510
|
# Dataset Card for "COVID-QA-train-80-test-10-validation-10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minh21/COVID-QA-train-80-test-10-validation-10
|
[
"region:us"
] |
2023-09-23T04:03:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "contexts", "struct": [{"name": "negative", "sequence": "string"}, {"name": "positive", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 9970554, "num_examples": 1614}, {"name": "test", "num_bytes": 1259743, "num_examples": 202}, {"name": "validation", "num_bytes": 1252940, "num_examples": 202}], "download_size": 5294376, "dataset_size": 12483237}}
|
2023-09-23T13:16:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "COVID-QA-train-80-test-10-validation-10"
More Information needed
|
[
"# Dataset Card for \"COVID-QA-train-80-test-10-validation-10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"COVID-QA-train-80-test-10-validation-10\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"COVID-QA-train-80-test-10-validation-10\"\n\nMore Information needed"
] |
c4e2fde99fe05d801d7daee608eb8798663a51a7
|
# Dataset Card for "belle_filtered_2.8M_CN"
A heuristically filtered version of [BelleGroup/train_3.5M_CN](https://huggingface.co/datasets/BelleGroup/train_3.5M_CN), with obvious errors and low-quality content removed from the corpus.
## Examples
```
{
"id":"66623028",
"conversations":[
{
"from":"human",
"value":"归纳介绍三种不同类型的电影题材"
},
{
"from":"assistant",
"value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3. 科幻片:科幻片是以未来科技和未知世界为主题的电影,通常包括时间旅行、外星人、虚拟现实、未来社会等各种奇幻的元素。这种电影描绘了一种比现实更加宏伟、更加神秘和惊奇的世界。例如《星际穿越》和《发条橙》等。"
}
]
}
{
"id":"18205437",
"conversations":[
{
"from":"human",
"value":"根据给定的食材,编写一道美食菜谱。\n鸡腿肉、蘑菇、洋葱"
},
{
"from":"assistant",
"value":"鸡肉炒蘑菇洋葱\n食材:\n- 鸡腿肉 400克,去骨去皮切成小块\n- 蘑菇 200克,切片\n- 洋葱 1个,切丝\n- 生姜 适量,切丝\n- 酱油 2勺\n- 料酒 1勺\n- 盐 适量\n- 白胡椒粉 适量\n- 植物油 适量\n做法:\n1. 在一个碗里放入切好的鸡腿肉块,加入酱油、料酒和适量的白胡椒粉,搅拌均匀,腌制10分钟。\n2. 热锅凉油,加入姜丝翻炒,放入腌好的鸡肉块翻炒至变色。\n3. 加入洋葱丝和蘑菇,翻炒2-3分钟至蘑菇变软。\n4. 加入适量的盐和少量的水,盖上锅盖焖煮5分钟直至鸡肉熟透。\n5. 最后翻炒几下,使汤汁变浓稠即可。\n温馨提示:\n1. 在腌制鸡肉时,如果想让肉质更嫩滑,可以加入少量的淀粉和蛋清。\n2. 可以根据个人口味加入其他调料,如蒜末、辣椒等。\n3. 洋葱和蘑菇可以用其他蔬菜代替,如青椒、胡萝卜等。"
},
{
"from":"human",
"value":"太好了,我会试着照着你教的做一次,但是我最近买了一部新相机,想让你推荐一些好的拍照地点。"
},
{
"from":"assistant",
"value":"当地的风景区、公园、景点等等都是比较适合拍照的地点。如果你想要一些特别的地方,可以考虑一些建筑群,街道巷子、咖啡馆、书店等具有文艺气息的地方。"
},
{
"from":"human",
"value":"嗯,这些地方都很不错,但是我还想找一些可以体验当地文化和风俗的地点,你有什么好的建议吗?"
},
{
"from":"assistant",
"value":"如果你想要更深入地了解当地文化和风俗,可以考虑参加一些游学活动,比如参观当地的博物馆、参加民俗节庆、尝试当地美食等等。此外,也可以去一些小城镇,感受那里的古朴风情。"
}
]
}
```
### Fields:
```
id: data id
conversations: the data content, given in dialogue form; includes both multi-turn and single-turn conversations
category: data category
```
Composition of the filtered data:
| Category | Count |
|----------------|---------:|
| close qa | 112,570 |
| classification | 125,623 |
| extract | 6,400 |
| open qa | 385,306 |
| harmless | 45,968 |
| role playing | 465,782 |
| rewrite | 28,146 |
| code | 180,825 |
| translation | 29,923 |
| summarization | 99,017 |
| math | 106,202 |
| generation |1,023,643 |
| brainstorming | 193,110 |
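A minimal loading sketch for this corpus, based on the feature schema in the dataset metadata (`id`, `category`, and a `conversations` list of `{from, value}` turns). Streaming is used here only to avoid the multi-gigabyte download, and the exact category string `"code"` is assumed to match the table above:

```python
from datasets import load_dataset

# Stream to avoid downloading the full ~2.5 GB corpus up front.
ds = load_dataset("larryvrh/belle_filtered_2.8M_CN", split="train", streaming=True)

# Pull a few "code"-category samples; the category string is assumed
# to match the labels in the table above.
for sample in ds.filter(lambda s: s["category"] == "code").take(3):
    first_turn = sample["conversations"][0]
    print(sample["id"], "|", first_turn["from"], ":", first_turn["value"][:60])
```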
|
larryvrh/belle_filtered_2.8M_CN
|
[
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1M<n<10M",
"language:zh",
"license:gpl-3.0",
"region:us"
] |
2023-09-23T04:11:57+00:00
|
{"language": ["zh"], "license": "gpl-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4151854934, "num_examples": 2802515}], "download_size": 2513439396, "dataset_size": 4151854934}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T04:28:23+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-1M<n<10M #language-Chinese #license-gpl-3.0 #region-us
|
Dataset Card for "belle\_filtered\_2.8M\_CN"
============================================
A heuristically filtered version of BelleGroup/train\_3.5M\_CN, with obvious errors and low-quality content removed from the corpus.
Examples
--
### Fields:
Composition of the filtered data:
|
[
"### 字段:\n\n\n过滤后的数据构成:"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-1M<n<10M #language-Chinese #license-gpl-3.0 #region-us \n",
"### 字段:\n\n\n过滤后的数据构成:"
] |
[
52,
13
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-1M<n<10M #language-Chinese #license-gpl-3.0 #region-us \n### 字段:\n\n\n过滤后的数据构成:"
] |
a1aacedf2bb7d79bd00dbae2ce03272621370cfa
|
# Dataset Card for "story_5_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/story_5_prompts
|
[
"region:us"
] |
2023-09-23T04:43:41+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4075, "num_examples": 12}], "download_size": 5167, "dataset_size": 4075}}
|
2023-09-23T09:18:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "story_5_prompts"
More Information needed
|
[
"# Dataset Card for \"story_5_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"story_5_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"story_5_prompts\"\n\nMore Information needed"
] |
20a5ff56b1cbeb099898ebad149fbb839b4a4292
|
# Dataset Card for "story_7_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/story_7_prompts
|
[
"region:us"
] |
2023-09-23T04:43:45+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3060, "num_examples": 10}], "download_size": 4258, "dataset_size": 3060}}
|
2023-09-23T09:18:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "story_7_prompts"
More Information needed
|
[
"# Dataset Card for \"story_7_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"story_7_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"story_7_prompts\"\n\nMore Information needed"
] |
4fa69c21219aec00490723e8c964d769657772c1
|
# Dataset Card for "story_8_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/story_8_prompts
|
[
"region:us"
] |
2023-09-23T04:43:47+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3687, "num_examples": 10}], "download_size": 5182, "dataset_size": 3687}}
|
2023-09-23T09:18:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "story_8_prompts"
More Information needed
|
[
"# Dataset Card for \"story_8_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"story_8_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"story_8_prompts\"\n\nMore Information needed"
] |
af9fa43b13534a7d32aadd718f74dd1e6d206fdc
|
# Dataset Card for "story_9_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/story_9_prompts
|
[
"region:us"
] |
2023-09-23T04:43:49+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2263, "num_examples": 7}], "download_size": 4475, "dataset_size": 2263}}
|
2023-09-23T09:18:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "story_9_prompts"
More Information needed
|
[
"# Dataset Card for \"story_9_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"story_9_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"story_9_prompts\"\n\nMore Information needed"
] |
d25481b69b81741fecc6dc161d2e303a9b410106
|
# Abandoned places in Japan
This is a dataset for training text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (public domain / P.D.).
## Dataset Description
- **Homepage:**
- https://www.deviantart.com/japanmaterial
-
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
JapanDegitalMaterial/Abandoned_places_in_Japan
|
[
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:cc0-1.0",
"region:us"
] |
2023-09-23T04:48:23+00:00
|
{"language": ["en", "ja"], "license": "cc0-1.0", "task_categories": ["text-to-image"]}
|
2023-09-23T11:58:42+00:00
|
[] |
[
"en",
"ja"
] |
TAGS
#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us
|
# Abandoned places in Japan
This is a dataset for training text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (public domain / P.D.).
## Dataset Description
- Homepage:
- URL
-
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Abandoned place in Japan\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n",
"# Abandoned place in Japan\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
36,
46,
27,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n# Abandoned place in Japan\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
29e499b1b4b726053a50baa03c699c5f74109c04
|
# Dataset Card for Evaluation run of VMware/open-llama-7b-open-instruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/VMware/open-llama-7b-open-instruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct",
"harness_winogrande_5",
split="train")
```
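The aggregated metrics mentioned above live in the extra "results" configuration; a minimal sketch for loading them (config and split names follow this card's file layout, so treat them as assumptions):
```python
from datasets import load_dataset

results = load_dataset("open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct",
	"results",
	split="latest")  # "latest" always points at the most recent run
```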
## Latest results
These are the [latest results from run 2023-09-23T05:54:33.646620](https://huggingface.co/datasets/open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct/blob/main/results_2023-09-23T05-54-33.646620.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.24811241610738255,
"em_stderr": 0.004423238498303271,
"f1": 0.3074643456375843,
"f1_stderr": 0.004402791070678147,
"acc": 0.3298042752007123,
"acc_stderr": 0.007683951336441218
},
"harness|drop|3": {
"em": 0.24811241610738255,
"em_stderr": 0.004423238498303271,
"f1": 0.3074643456375843,
"f1_stderr": 0.004402791070678147
},
"harness|gsm8k|5": {
"acc": 0.00530705079605762,
"acc_stderr": 0.0020013057209480527
},
"harness|winogrande|5": {
"acc": 0.654301499605367,
"acc_stderr": 0.013366596951934383
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct
|
[
"region:us"
] |
2023-09-23T04:54:37+00:00
|
{"pretty_name": "Evaluation run of VMware/open-llama-7b-open-instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-23T05:54:33.646620](https://huggingface.co/datasets/open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct/blob/main/results_2023-09-23T05-54-33.646620.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.24811241610738255,\n \"em_stderr\": 0.004423238498303271,\n \"f1\": 0.3074643456375843,\n \"f1_stderr\": 0.004402791070678147,\n \"acc\": 0.3298042752007123,\n \"acc_stderr\": 0.007683951336441218\n },\n \"harness|drop|3\": {\n \"em\": 0.24811241610738255,\n \"em_stderr\": 0.004423238498303271,\n \"f1\": 0.3074643456375843,\n \"f1_stderr\": 0.004402791070678147\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.00530705079605762,\n \"acc_stderr\": 0.0020013057209480527\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.654301499605367,\n \"acc_stderr\": 0.013366596951934383\n }\n}\n```", "repo_url": "https://huggingface.co/VMware/open-llama-7b-open-instruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_23T05_54_33.646620", "path": ["**/details_harness|drop|3_2023-09-23T05-54-33.646620.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-23T05-54-33.646620.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_23T05_54_33.646620", "path": ["**/details_harness|gsm8k|5_2023-09-23T05-54-33.646620.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-23T05-54-33.646620.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_23T05_54_33.646620", "path": ["**/details_harness|winogrande|5_2023-09-23T05-54-33.646620.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-23T05-54-33.646620.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_23T05_54_33.646620", "path": ["results_2023-09-23T05-54-33.646620.parquet"]}, {"split": "latest", "path": ["results_2023-09-23T05-54-33.646620.parquet"]}]}]}
|
2023-09-23T04:54:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of VMware/open-llama-7b-open-instruct
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model VMware/open-llama-7b-open-instruct on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-23T05:54:33.646620 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of VMware/open-llama-7b-open-instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model VMware/open-llama-7b-open-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T05:54:33.646620(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of VMware/open-llama-7b-open-instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model VMware/open-llama-7b-open-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T05:54:33.646620(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of VMware/open-llama-7b-open-instruct## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model VMware/open-llama-7b-open-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-23T05:54:33.646620(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d7a06ab2ac9eac3f77242630de370f962610f7d1
|
# Dataset Card for "blonde_woman_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/blonde_woman_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:13:58+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 98527, "num_examples": 1000}], "download_size": 1673, "dataset_size": 98527}}
|
2023-09-23T05:14:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "blonde_woman_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"blonde_woman_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"blonde_woman_photography_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"blonde_woman_photography_prompts\"\n\nMore Information needed"
] |
6c44e76da62a2424ab81e3f47f39e6e109b70bc7
|
# Dataset Card for "mimic-cxr-rrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JB/mimic-cxr-rrg
|
[
"region:us"
] |
2023-09-23T05:22:47+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "impression", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 14124813.0, "num_examples": 100}], "download_size": 14118845, "dataset_size": 14124813.0}}
|
2023-09-23T05:22:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mimic-cxr-rrg"
More Information needed
|
[
"# Dataset Card for \"mimic-cxr-rrg\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mimic-cxr-rrg\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mimic-cxr-rrg\"\n\nMore Information needed"
] |
02191ffe926ca9c726c63d9c306a832d40b521cd
|
# Dataset Card for "product_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/product_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:22:53+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 97294, "num_examples": 1000}], "download_size": 1640, "dataset_size": 97294}}
|
2023-09-23T05:22:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "product_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"product_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"product_photography_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"product_photography_prompts\"\n\nMore Information needed"
] |
128238f157e1bc0e44803c853ccdd9607f48257e
|
# Dataset Card for "catalogue_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/catalogue_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:25:33+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 114345, "num_examples": 1000}], "download_size": 2050, "dataset_size": 114345}}
|
2023-09-23T05:25:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "catalogue_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"catalogue_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"catalogue_photography_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"catalogue_photography_prompts\"\n\nMore Information needed"
] |
2623d73d313ab95d7f079c4ce87f4924ea375c2e
|
# Dataset Card for "documentary_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/documentary_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:27:19+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 87154, "num_examples": 1000}], "download_size": 1694, "dataset_size": 87154}}
|
2023-09-23T05:27:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "documentary_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"documentary_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"documentary_photography_prompts\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"documentary_photography_prompts\"\n\nMore Information needed"
] |
dac91cc0da2f680fd803d04d70739f2a76cd2709
|
# Dataset Card for "tilt_shift_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/tilt_shift_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:29:09+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62449, "num_examples": 1000}], "download_size": 1523, "dataset_size": 62449}}
|
2023-09-23T05:29:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tilt_shift_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"tilt_shift_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tilt_shift_photography_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tilt_shift_photography_prompts\"\n\nMore Information needed"
] |
c59c6d00382b341ca9328aced4d3bbb5bfd7e005
|
# Dataset Card for "luxurious_food_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/luxurious_food_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:31:08+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 116535, "num_examples": 1000}], "download_size": 1927, "dataset_size": 116535}}
|
2023-09-23T05:31:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "luxurious_food_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"luxurious_food_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"luxurious_food_photography_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"luxurious_food_photography_prompts\"\n\nMore Information needed"
] |
890298eeb7ffedd47b63fe59929cc606a17d4f54
|
# Dataset Card for "black_and_white_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/black_and_white_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:33:06+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190817, "num_examples": 1000}], "download_size": 4063, "dataset_size": 190817}}
|
2023-09-23T05:33:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "black_and_white_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"black_and_white_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"black_and_white_photography_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"black_and_white_photography_prompts\"\n\nMore Information needed"
] |
a9f343ac2824f1a95140a3a0d3928b5398e9af6a
|
# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/rinna/bilingual-gpt-neox-4b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [rinna/bilingual-gpt-neox-4b](https://huggingface.co/rinna/bilingual-gpt-neox-4b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b",
"harness_winogrande_5",
split="train")
```
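To see which configurations are available before loading, the `datasets` inspection helper can be used (a sketch; the expected names follow this card's metadata):
```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names("open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b")
print(configs)  # e.g. ['harness_drop_3', 'harness_gsm8k_5', 'harness_winogrande_5', 'results']
```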
## Latest results
These are the [latest results from run 2023-09-23T06:39:58.316038](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b/blob/main/results_2023-09-23T06-39-58.316038.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0019494546979865776,
"f1_stderr": 0.0001656985868155588,
"acc": 0.25927387529597473,
"acc_stderr": 0.007021406854444189
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0019494546979865776,
"f1_stderr": 0.0001656985868155588
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5185477505919495,
"acc_stderr": 0.014042813708888378
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b
|
[
"region:us"
] |
2023-09-23T05:40:02+00:00
|
{"pretty_name": "Evaluation run of rinna/bilingual-gpt-neox-4b", "dataset_summary": "Dataset automatically created during the evaluation run of model [rinna/bilingual-gpt-neox-4b](https://huggingface.co/rinna/bilingual-gpt-neox-4b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-23T06:39:58.316038](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b/blob/main/results_2023-09-23T06-39-58.316038.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.0019494546979865776,\n \"f1_stderr\": 0.0001656985868155588,\n \"acc\": 0.25927387529597473,\n \"acc_stderr\": 0.007021406854444189\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.0019494546979865776,\n \"f1_stderr\": 0.0001656985868155588\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5185477505919495,\n \"acc_stderr\": 0.014042813708888378\n }\n}\n```", "repo_url": "https://huggingface.co/rinna/bilingual-gpt-neox-4b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_23T06_39_58.316038", "path": ["**/details_harness|drop|3_2023-09-23T06-39-58.316038.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-23T06-39-58.316038.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_23T06_39_58.316038", "path": ["**/details_harness|gsm8k|5_2023-09-23T06-39-58.316038.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-23T06-39-58.316038.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_23T06_39_58.316038", "path": ["**/details_harness|winogrande|5_2023-09-23T06-39-58.316038.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-23T06-39-58.316038.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_23T06_39_58.316038", "path": ["results_2023-09-23T06-39-58.316038.parquet"]}, {"split": "latest", "path": ["results_2023-09-23T06-39-58.316038.parquet"]}]}]}
|
2023-09-23T05:40:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-23T06:39:58.316038 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T06:39:58.316038(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T06:39:58.316038(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-23T06:39:58.316038(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4636ab982a156629e80ef42fd056dc84f653e84f
|
# Dataset Card for "national_geographic_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/national_geographic_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:42:27+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 145497, "num_examples": 1000}], "download_size": 5368, "dataset_size": 145497}}
|
2023-09-23T05:42:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "national_geographic_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"national_geographic_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"national_geographic_photography_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"national_geographic_photography_prompts\"\n\nMore Information needed"
] |
f6e0e9ac72595a4ad5c84932765e1f6ea3a0831a
|
# Dataset Card for Evaluation run of JosephusCheung/Guanaco
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/JosephusCheung/Guanaco
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [JosephusCheung/Guanaco](https://huggingface.co/JosephusCheung/Guanaco) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Guanaco",
"harness_winogrande_5",
split="train")
```
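Each run is also stored under a split named after its timestamp, so a specific run can be pinned instead of "latest" (a sketch; the split name is taken from this card's metadata):
```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Guanaco",
	"harness_gsm8k_5",
	split="2023_09_23T06_44_02.813633")  # one timestamped split per recorded run
```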
## Latest results
These are the [latest results from run 2023-09-23T06:44:02.813633](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Guanaco/blob/main/results_2023-09-23T06-44-02.813633.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.23343120805369127,
"em_stderr": 0.004332062137833453,
"f1": 0.2960843120805377,
"f1_stderr": 0.004351433413685765,
"acc": 0.34333070244672453,
"acc_stderr": 0.006518256048373988
},
"harness|drop|3": {
"em": 0.23343120805369127,
"em_stderr": 0.004332062137833453,
"f1": 0.2960843120805377,
"f1_stderr": 0.004351433413685765
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.6866614048934491,
"acc_stderr": 0.013036512096747976
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_JosephusCheung__Guanaco
|
[
"region:us"
] |
2023-09-23T05:44:06+00:00
|
{"pretty_name": "Evaluation run of JosephusCheung/Guanaco", "dataset_summary": "Dataset automatically created during the evaluation run of model [JosephusCheung/Guanaco](https://huggingface.co/JosephusCheung/Guanaco) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_JosephusCheung__Guanaco\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-23T06:44:02.813633](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Guanaco/blob/main/results_2023-09-23T06-44-02.813633.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.23343120805369127,\n \"em_stderr\": 0.004332062137833453,\n \"f1\": 0.2960843120805377,\n \"f1_stderr\": 0.004351433413685765,\n \"acc\": 0.34333070244672453,\n \"acc_stderr\": 0.006518256048373988\n },\n \"harness|drop|3\": {\n \"em\": 0.23343120805369127,\n \"em_stderr\": 0.004332062137833453,\n \"f1\": 0.2960843120805377,\n \"f1_stderr\": 0.004351433413685765\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6866614048934491,\n \"acc_stderr\": 0.013036512096747976\n }\n}\n```", "repo_url": "https://huggingface.co/JosephusCheung/Guanaco", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_23T06_44_02.813633", "path": ["**/details_harness|drop|3_2023-09-23T06-44-02.813633.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-23T06-44-02.813633.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_23T06_44_02.813633", "path": ["**/details_harness|gsm8k|5_2023-09-23T06-44-02.813633.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-23T06-44-02.813633.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_23T06_44_02.813633", "path": ["**/details_harness|winogrande|5_2023-09-23T06-44-02.813633.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-23T06-44-02.813633.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_23T06_44_02.813633", "path": ["results_2023-09-23T06-44-02.813633.parquet"]}, {"split": "latest", "path": ["results_2023-09-23T06-44-02.813633.parquet"]}]}]}
|
2023-09-23T05:44:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of JosephusCheung/Guanaco
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model JosephusCheung/Guanaco on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
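For example, using the repository and configuration names recorded in this card's metadata:

```python
from datasets import load_dataset

# Load the 5-shot Winogrande details from the latest run.
data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Guanaco",
                    "harness_winogrande_5",
                    split="train")
```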
## Latest results
These are the latest results from run 2023-09-23T06:44:02.813633 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of JosephusCheung/Guanaco",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Guanaco on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T06:44:02.813633(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of JosephusCheung/Guanaco",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Guanaco on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T06:44:02.813633(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
17,
31,
165,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of JosephusCheung/Guanaco## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Guanaco on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-23T06:44:02.813633(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f9c00988b4e5ea247e9ff735e424983e9f7891bd
|
# Dataset Card for "unsplash_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/unsplash_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:45:37+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 148707, "num_examples": 1000}], "download_size": 2495, "dataset_size": 148707}}
|
2023-09-23T05:45:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "unsplash_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"unsplash_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"unsplash_photography_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"unsplash_photography_prompts\"\n\nMore Information needed"
] |
18d2c2dcf697519e82b71e7c124addd6258506a2
|
# Dataset Card for "truyenfull"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AnhTong/truyenfull
|
[
"region:us"
] |
2023-09-23T05:46:06+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "ds_1", "num_bytes": 688765404, "num_examples": 47546}, {"name": "ds_2", "num_bytes": 686452540, "num_examples": 49325}, {"name": "ds_3", "num_bytes": 662112505, "num_examples": 46766}, {"name": "ds_4", "num_bytes": 631547999, "num_examples": 47222}, {"name": "ds_5", "num_bytes": 645861526, "num_examples": 49358}, {"name": "ds_6", "num_bytes": 669993661, "num_examples": 49112}, {"name": "ds_7", "num_bytes": 662999345, "num_examples": 48904}, {"name": "ds_8", "num_bytes": 713727150, "num_examples": 49245}, {"name": "ds_9", "num_bytes": 651720408, "num_examples": 48605}, {"name": "ds_10", "num_bytes": 966575566, "num_examples": 48809}, {"name": "ds_12", "num_bytes": 762515180, "num_examples": 49725}, {"name": "ds_13", "num_bytes": 686909655, "num_examples": 48973}, {"name": "ds_14", "num_bytes": 610358320, "num_examples": 48564}, {"name": "ds_15", "num_bytes": 616740599, "num_examples": 49389}], "download_size": 4862424797, "dataset_size": 9656279858}}
|
2023-09-23T09:26:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "truyenfull"
More Information needed
|
[
"# Dataset Card for \"truyenfull\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"truyenfull\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"truyenfull\"\n\nMore Information needed"
] |
03d4380e780d5d34a7b280745b849b15c0f89683
|
# Dataset Card for "vintage_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/vintage_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:52:52+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 149047, "num_examples": 1000}], "download_size": 2331, "dataset_size": 149047}}
|
2023-09-23T05:52:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "vintage_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"vintage_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"vintage_photography_prompts\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"vintage_photography_prompts\"\n\nMore Information needed"
] |
19e24048972e372e306e3ad5ef75f6788037080a
|
# Dataset Card for "time_lapse_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/time_lapse_photography_prompts
|
[
"region:us"
] |
2023-09-23T05:57:13+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 114361, "num_examples": 1000}], "download_size": 3914, "dataset_size": 114361}}
|
2023-09-23T05:57:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "time_lapse_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"time_lapse_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"time_lapse_photography_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"time_lapse_photography_prompts\"\n\nMore Information needed"
] |
8e2a2f8ac158b9c5a806bf2b7ccdb94dfe0183fd
|
# Dataset Card for "fine_art_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/fine_art_photography_prompts
|
[
"region:us"
] |
2023-09-23T06:06:02+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 132266, "num_examples": 1000}], "download_size": 5939, "dataset_size": 132266}}
|
2023-09-23T06:06:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fine_art_photography_prompts"
More Information needed
|
[
"# Dataset Card for \"fine_art_photography_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fine_art_photography_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fine_art_photography_prompts\"\n\nMore Information needed"
] |
6972e886bab3dc8c21d502a9c198425c35c271f3
|
These are the story elements extracted from the TinyStories dataset, to be used as randomization data for short story generation.
|
Corianas/StorySalt_Concepts
|
[
"license:cdla-sharing-1.0",
"region:us"
] |
2023-09-23T06:49:24+00:00
|
{"license": "cdla-sharing-1.0"}
|
2023-09-28T08:24:44+00:00
|
[] |
[] |
TAGS
#license-cdla-sharing-1.0 #region-us
|
These are the story elements extracted from the TinyStories dataset, to be used as randomization data for short story generation.
|
[] |
[
"TAGS\n#license-cdla-sharing-1.0 #region-us \n"
] |
[
17
] |
[
"passage: TAGS\n#license-cdla-sharing-1.0 #region-us \n"
] |
8ad2c1c3a5a0717ad303826553337437c4f9a384
|
# DISC-Law-SFT Dataset
Intelligent legal systems for Chinese require a combination of various abilities, including legal text understanding and generation. To achieve this, we have constructed a high-quality supervised fine-tuning dataset called DISC-Law-SFT, which covers different legal scenarios such as legal information extraction, legal judgment prediction, legal document summarization, and legal question answering. DISC-Law-SFT comprises two subsets, DISC-Law-SFT-Pair and DISC-Law-SFT-Triplet. The former aims to introduce legal reasoning abilities to the LLM, while the latter helps enhance the model's capability to utilize external legal knowledge. For more detailed information, please refer to our [technical report](https://arxiv.org/abs/2309.11325). The distribution of the dataset is:
<table>
<tr>
<th>Dataset</th>
<th>Task/Source</th>
<th>Size</th>
<th>Scenario</th>
</tr>
<tr>
<td rowspan="10">DISC-Law-SFT-Pair</td>
<td>Legal information extraction</td>
<td>32K</td>
<td rowspan="7">Legal professional assistant</td>
</tr>
<tr>
<td>Legal event detection</td>
<td>27K</td>
</tr>
<tr>
<td>Legal case classification</td>
<td>20K</td>
</tr>
<tr>
<td>Legal judgment prediction</td>
<td>11K</td>
</tr>
<tr>
<td>Legal case matching</td>
<td>8K</td>
</tr>
<tr>
<td>Legal text summarization</td>
<td>9K</td>
</tr>
<tr>
<td>Judicial public opinion summarization</td>
<td>6K</td>
</tr>
<tr>
<td>Legal question answering</td>
<td>93K</td>
<td>Legal consultation services</td>
</tr>
<tr>
<td>Legal reading comprehension</td>
<td>38K</td>
<td rowspan="2">Judicial examination assistant</td>
</tr>
<tr>
<td>Judicial examination</td>
<td>12K</td>
</tr>
<tr>
<td rowspan="2">DISC-Law-SFT-Triple</td>
<td>Legal judgement prediction</td>
<td>16K</td>
<td>Legal professional assistant</td>
</tr>
<tr>
<td>Legal question answering</td>
<td>23K</td>
<td>Legal consultation services</td>
</tr>
<tr>
<td rowspan="2">General</td>
<td>Alpaca-GPT4</td>
<td>48K</td>
<td rowspan="2">General scenarios</td>
</tr>
<tr>
<td>Firefly</td>
<td>60K</td>
</tr>
<tr>
<td>Total</td>
<td colspan="3">403K</td>
</tr>
</table>
We currently open-source most of the DISC-Law-SFT Dataset.
For more details and news, check our [homepage](https://github.com/FudanDISC/DISC-LawLLM)!
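As a rough loading sketch with the `datasets` library (using the default configuration is an assumption, not something guaranteed by this card; adjust to the files actually published in the repository):

```python
from datasets import load_dataset

# Minimal sketch: load whatever default configuration the repo exposes.
# If the data is stored as raw JSON files, point data_files at them instead.
ds = load_dataset("ShengbinYue/DISC-Law-SFT")
print(ds)
```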
|
ShengbinYue/DISC-Law-SFT
|
[
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"legal",
"arxiv:2309.11325",
"region:us"
] |
2023-09-23T06:56:07+00:00
|
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["100M<n<1B"], "tags": ["legal"]}
|
2023-09-25T13:47:18+00:00
|
[
"2309.11325"
] |
[
"zh"
] |
TAGS
#size_categories-100M<n<1B #language-Chinese #license-apache-2.0 #legal #arxiv-2309.11325 #region-us
|
DISC-Law-SFT Dataset
====================
Intelligent legal systems for Chinese require a combination of various abilities, including legal text understanding and generation. To achieve this, we have constructed a high-quality supervised fine-tuning dataset called DISC-Law-SFT, which covers different legal scenarios such as legal information extraction, legal judgment prediction, legal document summarization, and legal question answering. DISC-Law-SFT comprises two subsets, DISC-Law-SFT-Pair and DISC-Law-SFT-Triplet. The former aims to introduce legal reasoning abilities to the LLM, while the latter helps enhance the model's capability to utilize external legal knowledge. For more detailed information, please refer to our technical report. The distribution of the dataset is:
Dataset: DISC-Law-SFT-Pair, Task/Source: Legal question answering, Size: 93K, Scenario: Legal consultation services
Dataset: DISC-Law-SFT-Triplet, Task/Source: Legal question answering, Size: 23K, Scenario: Legal consultation services
We currently open-source most of the DISC-Law-SFT Dataset.
For more details and news, check our homepage!
|
[] |
[
"TAGS\n#size_categories-100M<n<1B #language-Chinese #license-apache-2.0 #legal #arxiv-2309.11325 #region-us \n"
] |
[
41
] |
[
"passage: TAGS\n#size_categories-100M<n<1B #language-Chinese #license-apache-2.0 #legal #arxiv-2309.11325 #region-us \n"
] |
ab27b3cefdacb2bff537825e68b0c1dc2348bc61
|
# "wikipedia-zh-yue-qa"
Questions and answers extracted from Cantonese Wikipedia
|
indiejoseph/wikipedia-zh-yue-qa
|
[
"region:us"
] |
2023-09-23T07:15:04+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5335035, "num_examples": 35415}], "download_size": 3283403, "dataset_size": 5335035}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T09:01:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# "wikipedia-zh-yue-qa"
Questions and answers extracted from Cantonese Wikipedia
|
[
"# \"wikipedia-zh-yue-qa\"\n\nQuestion and answer extracted from Cantonese Wikipedia"
] |
[
"TAGS\n#region-us \n",
"# \"wikipedia-zh-yue-qa\"\n\nQuestion and answer extracted from Cantonese Wikipedia"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# \"wikipedia-zh-yue-qa\"\n\nQuestion and answer extracted from Cantonese Wikipedia"
] |
9ee5228bdcc7b2025fd773e4d07a0116a2465401
|
# Dataset Card for "srbd-test1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tanvirsrbd1/srbd-test1-1
|
[
"region:us"
] |
2023-09-23T07:44:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "xml", "dtype": "string"}, {"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49670989, "num_examples": 1810}], "download_size": 5699038, "dataset_size": 49670989}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T07:44:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "srbd-test1-1"
More Information needed
|
[
"# Dataset Card for \"srbd-test1-1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"srbd-test1-1\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"srbd-test1-1\"\n\nMore Information needed"
] |
b514c08adb90057e9b4c79ede4864fb4aa48bd26
|

# Description:
This dataset is a collection of text generated by a variety of AI models, including Falcon 180B, Vicuna 33B, Llama 70B, GPT-3.5, Claude 2, Claude Instant, Bard, and Bing Chat (Creative, Balanced, and Precise modes). The dataset can be used for a variety of purposes, including instruction following, question answering, summarization, and paraphrasing.
# Dataset Format:
The dataset is in a JSON format, with each entry containing the following fields:
- system: system prompt
- user: user prompt
- assistant: assistant response
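A minimal sketch of one record, purely to illustrate the schema described above (the field values are invented, not drawn from the dataset):

```python
import json

# Hypothetical entry showing the three fields of each record.
entry = {
    "system": "You are a helpful assistant.",
    "user": "Paraphrase: 'The meeting was postponed until Friday.'",
    "assistant": "The meeting has been moved to Friday.",
}
print(json.dumps(entry, indent=2))
```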
# Citation:
```
@dataset{opensparrow,
author = {Kaleido Singapore},
title = {opensparrow},
url = {https://huggingface.co/datasets/KaleidoSG/opensparrow},
year = {2023},
license = {Creative Commons Attribution 4.0 International License}
}
```
|
KaleidoSG/opensparrow
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-09-23T07:56:24+00:00
|
{"license": "cc-by-4.0"}
|
2023-09-23T19:49:43+00:00
|
[] |
[] |
TAGS
#license-cc-by-4.0 #region-us
|
!URL
# Description:
This dataset is a collection of text generated by a variety of AI models, including Falcon 180B, Vicuna 33B, Llama 70B, GPT-3.5, Claude 2, Claude Instant, Bard, and Bing Chat (Creative, Balanced, and Precise modes). The dataset can be used for a variety of purposes, including instruction following, question answering, summarization, and paraphrasing.
# Dataset Format:
The dataset is in a JSON format, with each entry containing the following fields:
- system: system prompt
- user: user prompt
- assistant: assistant response
|
[
"# Description: \nThis dataset is a collection of text generated by a variety of AI models, including Falcon 180B, Vicuna 33B, Llama 70b, GPT-3.5, Claude 2, Claude Instant, Bard, Bing Chat (Creative, Balanced, Precise modes). The dataset can be used for a variety of purposes, including instructions, question answering, summarization, and paraphrasing.",
"# Dataset Format: \nThe dataset is in a JSON format, with each entry containing the following fields:\n- system: system prompt\n- user: user prompt\n- assistant: assistant response\n\n:"
] |
[
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Description: \nThis dataset is a collection of text generated by a variety of AI models, including Falcon 180B, Vicuna 33B, Llama 70b, GPT-3.5, Claude 2, Claude Instant, Bard, Bing Chat (Creative, Balanced, Precise modes). The dataset can be used for a variety of purposes, including instructions, question answering, summarization, and paraphrasing.",
"# Dataset Format: \nThe dataset is in a JSON format, with each entry containing the following fields:\n- system: system prompt\n- user: user prompt\n- assistant: assistant response\n\n:"
] |
[
15,
93,
41
] |
[
"passage: TAGS\n#license-cc-by-4.0 #region-us \n# Description: \nThis dataset is a collection of text generated by a variety of AI models, including Falcon 180B, Vicuna 33B, Llama 70b, GPT-3.5, Claude 2, Claude Instant, Bard, Bing Chat (Creative, Balanced, Precise modes). The dataset can be used for a variety of purposes, including instructions, question answering, summarization, and paraphrasing.# Dataset Format: \nThe dataset is in a JSON format, with each entry containing the following fields:\n- system: system prompt\n- user: user prompt\n- assistant: assistant response\n\n:"
] |
e8cdcaddc1cf3618561b0da518ec5b6b0523ff21
|
# Dataset Card for Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-23T08:58:22.598379](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg/blob/main/results_2023-09-23T08-58-22.598379.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.02925755033557047,
"em_stderr": 0.0017258801842771152,
"f1": 0.09186136744966467,
"f1_stderr": 0.0021533865918944134,
"acc": 0.4337145226735951,
"acc_stderr": 0.009944810794409672
},
"harness|drop|3": {
"em": 0.02925755033557047,
"em_stderr": 0.0017258801842771152,
"f1": 0.09186136744966467,
"f1_stderr": 0.0021533865918944134
},
"harness|gsm8k|5": {
"acc": 0.09552691432903715,
"acc_stderr": 0.008096605771155745
},
"harness|winogrande|5": {
"acc": 0.7719021310181531,
"acc_stderr": 0.0117930158176636
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg
|
[
"region:us"
] |
2023-09-23T07:58:26+00:00
|
{"pretty_name": "Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg", "dataset_summary": "Dataset automatically created during the evaluation run of model [openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-23T08:58:22.598379](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg/blob/main/results_2023-09-23T08-58-22.598379.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.02925755033557047,\n \"em_stderr\": 0.0017258801842771152,\n \"f1\": 0.09186136744966467,\n \"f1_stderr\": 0.0021533865918944134,\n \"acc\": 0.4337145226735951,\n \"acc_stderr\": 0.009944810794409672\n },\n \"harness|drop|3\": {\n \"em\": 0.02925755033557047,\n \"em_stderr\": 0.0017258801842771152,\n \"f1\": 0.09186136744966467,\n \"f1_stderr\": 0.0021533865918944134\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09552691432903715,\n \"acc_stderr\": 0.008096605771155745\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.0117930158176636\n }\n}\n```", "repo_url": "https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_23T08_58_22.598379", "path": ["**/details_harness|drop|3_2023-09-23T08-58-22.598379.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-23T08-58-22.598379.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_23T08_58_22.598379", "path": ["**/details_harness|gsm8k|5_2023-09-23T08-58-22.598379.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-23T08-58-22.598379.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_23T08_58_22.598379", "path": ["**/details_harness|winogrande|5_2023-09-23T08-58-22.598379.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-23T08-58-22.598379.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_23T08_58_22.598379", "path": ["results_2023-09-23T08-58-22.598379.parquet"]}, {"split": "latest", "path": ["results_2023-09-23T08-58-22.598379.parquet"]}]}]}
|
2023-09-23T07:58:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model openaccess-ai-collective/manticore-13b-chat-pyg on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
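For example, using the repository and configuration names recorded in this card's metadata:

```python
from datasets import load_dataset

# Load the 5-shot Winogrande details from the latest run.
data = load_dataset("open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg",
                    "harness_winogrande_5",
                    split="train")
```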
## Latest results
These are the latest results from run 2023-09-23T08:58:22.598379 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/manticore-13b-chat-pyg on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T08:58:22.598379(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/manticore-13b-chat-pyg on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T08:58:22.598379(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model openaccess-ai-collective/manticore-13b-chat-pyg on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-23T08:58:22.598379(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
01b8044c83733572ba66c94cbfec2cd1a4bbea48
|
# Dataset Card for Evaluation run of sartmis1/starcoder-finetune-selfinstruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [sartmis1/starcoder-finetune-selfinstruct](https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-23T09:06:26.158683](https://huggingface.co/datasets/open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct/blob/main/results_2023-09-23T09-06-26.158683.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931189545,
"f1": 0.04220742449664442,
"f1_stderr": 0.0011048606881245398,
"acc": 0.31919735419373096,
"acc_stderr": 0.01022815770603217
},
"harness|drop|3": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931189545,
"f1": 0.04220742449664442,
"f1_stderr": 0.0011048606881245398
},
"harness|gsm8k|5": {
"acc": 0.060652009097801364,
"acc_stderr": 0.0065747333814057925
},
"harness|winogrande|5": {
"acc": 0.5777426992896606,
"acc_stderr": 0.013881582030658549
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct
|
[
"region:us"
] |
2023-09-23T08:06:30+00:00
|
{"pretty_name": "Evaluation run of sartmis1/starcoder-finetune-selfinstruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [sartmis1/starcoder-finetune-selfinstruct](https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-23T09:06:26.158683](https://huggingface.co/datasets/open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct/blob/main/results_2023-09-23T09-06-26.158683.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.00036305608931189545,\n \"f1\": 0.04220742449664442,\n \"f1_stderr\": 0.0011048606881245398,\n \"acc\": 0.31919735419373096,\n \"acc_stderr\": 0.01022815770603217\n },\n \"harness|drop|3\": {\n \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.00036305608931189545,\n \"f1\": 0.04220742449664442,\n \"f1_stderr\": 0.0011048606881245398\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.060652009097801364,\n \"acc_stderr\": 0.0065747333814057925\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5777426992896606,\n \"acc_stderr\": 0.013881582030658549\n }\n}\n```", "repo_url": "https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_23T09_06_26.158683", "path": ["**/details_harness|drop|3_2023-09-23T09-06-26.158683.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-23T09-06-26.158683.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_23T09_06_26.158683", "path": ["**/details_harness|gsm8k|5_2023-09-23T09-06-26.158683.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-23T09-06-26.158683.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_23T09_06_26.158683", "path": ["**/details_harness|winogrande|5_2023-09-23T09-06-26.158683.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-23T09-06-26.158683.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_23T09_06_26.158683", "path": ["results_2023-09-23T09-06-26.158683.parquet"]}, {"split": "latest", "path": ["results_2023-09-23T09-06-26.158683.parquet"]}]}]}
|
2023-09-23T08:06:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of sartmis1/starcoder-finetune-selfinstruct
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model sartmis1/starcoder-finetune-selfinstruct on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
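For example, using the repository and configuration names recorded in this card's metadata:

```python
from datasets import load_dataset

# Load the 5-shot Winogrande details from the latest run.
data = load_dataset("open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct",
                    "harness_winogrande_5",
                    split="train")
```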
## Latest results
These are the latest results from run 2023-09-23T09:06:26.158683 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of sartmis1/starcoder-finetune-selfinstruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model sartmis1/starcoder-finetune-selfinstruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T09:06:26.158683(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of sartmis1/starcoder-finetune-selfinstruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model sartmis1/starcoder-finetune-selfinstruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T09:06:26.158683(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of sartmis1/starcoder-finetune-selfinstruct## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model sartmis1/starcoder-finetune-selfinstruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-23T09:06:26.158683(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
1ab57fe8bab0c55ac88944b0db2ed646a71eda79
|
# Dataset Card for "srbd-test1-1_annotated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tanvirsrbd1/srbd-test1-1_annotated
|
[
"region:us"
] |
2023-09-23T08:15:25+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "xml", "dtype": "string"}, {"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "annotated", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35197381.665745854, "num_examples": 1265}], "download_size": 3944835, "dataset_size": 35197381.665745854}}
|
2023-09-23T08:15:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "srbd-test1-1_annotated"
More Information needed
|
[
"# Dataset Card for \"srbd-test1-1_annotated\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"srbd-test1-1_annotated\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"srbd-test1-1_annotated\"\n\nMore Information needed"
] |
e9f0ffabac3ec5b1c855bd4ad3dc8ed3032ca952
|
# Dataset Card for CoNLL-2002
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://www.clips.uantwerpen.be/conll2002/ner/)
- **Repository:** [github](https://github.com/teropa/nlp/tree/master/resources/corpora/conll2002)
- **Paper:** [paper](https://www.aclweb.org/anthology/W02-2024/)
- **Point of Contact:** [Erik Tjong Kim Sang](mailto:[email protected])
### Dataset Summary
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.
After 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.
- `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
- `parsing`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.
### Languages
There are two languages available: Spanish (es) and Dutch (nl).
## Dataset Structure
### Data Instances
The examples look like this:
```
{
'id': '0',
'document_id': 0,
'sentence_id': 0,
'tokens': ['Melbourne', '(', 'Australia', ')', ',', '25', 'may', '(', 'EFE', ')', '.'],
'pos_tags': [29, 21, 29, 22, 13, 59, 28, 21, 28, 22, 20],
'ner_tags': [5, 0, 5, 0, 0, 0, 0, 0, 3, 0, 0]
}
```
The original data files within the Dutch sub-dataset contain `-DOCSTART-` lines used to separate documents; since `-DOCSTART-` is only a boundary marker between two documents, these lines are filtered out in this implementation.
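Such an instance can be obtained by loading one of the two configurations with the `datasets` library (the repository id below is the one this card is hosted under; the canonical `conll2002` script exposes the same configurations):

```python
from datasets import load_dataset

# Load the Spanish configuration; pass "nl" instead for Dutch.
dataset = load_dataset("tomaarsen/conll2002", "es")
print(dataset["train"][0])  # an instance shaped like the example above
```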
### Data Fields
- `id`: id of the sample
- `document_id`: an `int32` feature tracking which document the sample is from.
- `sentence_id`: an `int32` feature tracking which sentence in this document the sample is from.
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
- `pos_tags`: the POS tags of each token
The POS tags correspond to this list for Spanish:
```
'AO', 'AQ', 'CC', 'CS', 'DA', 'DE', 'DD', 'DI', 'DN', 'DP', 'DT', 'Faa', 'Fat', 'Fc', 'Fd', 'Fe', 'Fg', 'Fh', 'Fia', 'Fit', 'Fp', 'Fpa', 'Fpt', 'Fs', 'Ft', 'Fx', 'Fz', 'I', 'NC', 'NP', 'P0', 'PD', 'PI', 'PN', 'PP', 'PR', 'PT', 'PX', 'RG', 'RN', 'SP', 'VAI', 'VAM', 'VAN', 'VAP', 'VAS', 'VMG', 'VMI', 'VMM', 'VMN', 'VMP', 'VMS', 'VSG', 'VSI', 'VSM', 'VSN', 'VSP', 'VSS', 'Y', 'Z'
```
And this list for Dutch:
```
'Adj', 'Adv', 'Art', 'Conj', 'Int', 'Misc', 'N', 'Num', 'Prep', 'Pron', 'Punc', 'V'
```
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).
It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity, usually only the top-level entity is marked.
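Because `pos_tags` and `ner_tags` are stored as class-label integers, the tag names listed above can be recovered from the dataset features. A minimal sketch, continuing the loading example from the Data Instances section:

```python
# Map the integer class labels back to the tag names listed above.
pos_names = dataset["train"].features["pos_tags"].feature.names
ner_names = dataset["train"].features["ner_tags"].feature.names

example = dataset["train"][0]
for token, pos_id, ner_id in zip(example["tokens"], example["pos_tags"], example["ner_tags"]):
    print(f"{token}\t{pos_names[pos_id]}\t{ner_names[ner_id]}")
# Melbourne  NP   B-LOC
# (          Fpa  O
# ...
```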
### Data Splits
For both configurations (Spanish and Dutch), there are three splits.
The original splits were named `train`, `testa` and `testb` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes:
| | train | validation | test |
| ----- |-------:|------------:|------:|
| N. Examples (Spanish) | 8324 | 1916 | 1518 |
| N. Examples (Dutch) | 15807 | 2896 | 5196 |
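These sizes can be checked on a loaded configuration; note that the `dataset_info` metadata of this repository reports counts one lower per split (e.g. 8323/1915/1517 for Spanish), which is what `load_dataset` actually returns:

```python
# Number of examples per split for the Spanish configuration.
print({split: ds.num_rows for split, ds in dataset.items()})
# {'train': 8323, 'validation': 1915, 'test': 1517} per the repository metadata
```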
## Dataset Creation
### Curation Rationale
The dataset was created to provide new resources for two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.
### Source Data
The Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
The Dutch data consist of four editions of the Belgian newspaper "De Morgen" of 2000 (June 2, July 1, August 1 and September 1).
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
For the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.
#### Who are the annotators?
The Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).
The Dutch data was annotated as a part of the Atranos project at the University of Antwerp.
### Personal and Sensitive Information
The data is sourced from newspaper text and only contains mentions of public figures or individuals.
## Considerations for Using the Data
### Social Impact of Dataset
Named Entity Recognition systems can be used to efficiently index news text, making it easy to gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.
### Discussion of Biases
News text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
The annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).
### Licensing Information
The licensing status of the data, especially the news source text, is unknown.
### Citation Information
The [BibTeX](http://www.bibtex.org/)-formatted reference for the dataset:
```
@inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
tomaarsen/conll2002
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"language:nl",
"license:unknown",
"region:us"
] |
2023-09-23T09:04:25+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["es", "nl"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech"], "paperswithcode_id": "conll-2002", "pretty_name": "CoNLL-2002", "config_names": ["es", "nl"], "dataset_info": [{"config_name": "es", "features": [{"name": "id", "dtype": "string"}, {"name": "document_id", "dtype": "int32"}, {"name": "sentence_id", "dtype": "int32"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "AO", "1": "AQ", "2": "CC", "3": "CS", "4": "DA", "5": "DE", "6": "DD", "7": "DI", "8": "DN", "9": "DP", "10": "DT", "11": "Faa", "12": "Fat", "13": "Fc", "14": "Fd", "15": "Fe", "16": "Fg", "17": "Fh", "18": "Fia", "19": "Fit", "20": "Fp", "21": "Fpa", "22": "Fpt", "23": "Fs", "24": "Ft", "25": "Fx", "26": "Fz", "27": "I", "28": "NC", "29": "NP", "30": "P0", "31": "PD", "32": "PI", "33": "PN", "34": "PP", "35": "PR", "36": "PT", "37": "PX", "38": "RG", "39": "RN", "40": "SP", "41": "VAI", "42": "VAM", "43": "VAN", "44": "VAP", "45": "VAS", "46": "VMG", "47": "VMI", "48": "VMM", "49": "VMN", "50": "VMP", "51": "VMS", "52": "VSG", "53": "VSI", "54": "VSM", "55": "VSN", "56": "VSP", "57": "VSS", "58": "Y", "59": "Z"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "splits": [{"name": "train", "num_bytes": 6738717, "num_examples": 8323}, {"name": "validation", "num_bytes": 1349064, "num_examples": 1915}, {"name": "test", "num_bytes": 1306252, "num_examples": 1517}], "download_size": 4140690, "dataset_size": 9394033}, {"config_name": "nl", "features": [{"name": "id", "dtype": "string"}, {"name": "document_id", "dtype": "int32"}, {"name": "sentence_id", "dtype": "int32"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "Adj", "1": "Adv", "2": "Art", "3": "Conj", "4": "Int", "5": "Misc", "6": "N", "7": "Num", "8": "Prep", "9": "Pron", "10": "Punc", "11": "V"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "splits": [{"name": "train", "num_bytes": 5435346, "num_examples": 15806}, {"name": "validation", "num_bytes": 1017418, "num_examples": 2895}, {"name": "test", "num_bytes": 1850382, "num_examples": 5195}], "download_size": 3642241, "dataset_size": 8303146}]}
|
2023-09-23T09:53:11+00:00
|
[] |
[
"es",
"nl"
] |
TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-crowdsourced #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #language-Dutch #license-unknown #region-us
|
Dataset Card for CoNLL-2002
===========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: homepage
* Repository: github
* Paper: paper
* Point of Contact: Erik Tjong Kim Sang
### Dataset Summary
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on a language other than English.
After 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.
* 'named-entity-recognition': The performance in this task is measured with F1 (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
* 'parsing': The performance in this task is measured with F1 (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.
### Languages
There are two languages available: Spanish (es) and Dutch (nl).
Dataset Structure
-----------------
### Data Instances
The examples look like this:
The original data files within the Dutch sub-dataset contain '-DOCSTART-' lines used to separate documents; since '-DOCSTART-' is only a boundary marker between two documents, these lines are filtered out in this implementation.
### Data Fields
* 'id': id of the sample
* 'document\_id': an 'int32' feature tracking which document the sample is from.
* 'sentence\_id': an 'int32' feature tracking which sentence in this document the sample is from.
* 'tokens': the tokens of the example text
* 'ner\_tags': the NER tags of each token
* 'pos\_tags': the POS tags of each token
The POS tags correspond to this list for Spanish:
And this list for Dutch:
The NER tags correspond to this list:
The NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).
It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity, usually only the top-level entity is marked.
### Data Splits
For both configurations (Spanish and Dutch), there are three splits.
The original splits were named 'train', 'testa' and 'testb' and they correspond to the 'train', 'validation' and 'test' splits.
The splits have the following sizes:
Dataset Creation
----------------
### Curation Rationale
The dataset was created to provide new resources for two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.
### Source Data
The Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
The Dutch data consist of four editions of the Belgian newspaper "De Morgen" of 2000 (June 2, July 1, August 1 and September 1).
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
For the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.
#### Who are the annotators?
The Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).
The Dutch data was annotated as a part of the Atranos project at the University of Antwerp.
### Personal and Sensitive Information
The data is sourced from newspaper text and only contains mentions of public figures or individuals.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
Named Entity Recognition systems can be used to efficiently index news text, making it easy to gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.
### Discussion of Biases
News text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
Additional Information
----------------------
### Dataset Curators
The annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).
### Licensing Information
The licensing status of the data, especially the news source text, is unknown.
### Contributions
Thanks to @lhoestq for adding this dataset.
|
[
"### Dataset Summary\n\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:\n\n\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\n\n\nThe shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).",
"### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.\n\n\nAfter 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.\n\n\n* 'named-entity-recognition': The performance in this task is measured with F1 (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.\n* 'parsing': The performance in this task is measured with F1 (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.",
"### Languages\n\n\nThere are two languages available : Spanish (es) and Dutch (nl).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe examples look like this :\n\n\nThe original data files within the Dutch sub-dataset have '-DOCSTART-' lines used to separate documents, but these lines are removed here.\nIndeed '-DOCSTART-' is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.",
"### Data Fields\n\n\n* 'id': id of the sample\n* 'document\\_id': an 'int32' feature tracking which document the sample is from.\n* 'sentence\\_id': an 'int32' feature tracking which sentence in this document the sample is from.\n* 'tokens': the tokens of the example text\n* 'ner\\_tags': the NER tags of each token\n* 'pos\\_tags': the POS tags of each token\n\n\nThe POS tags correspond to this list for Spanish:\n\n\nAnd this list for Dutch:\n\n\nThe NER tags correspond to this list:\n\n\nThe NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).\n\n\nIt is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity usually, only the top level entity is marked.",
"### Data Splits\n\n\nFor both configurations (Spanish and Dutch), there are three splits.\n\n\nThe original splits were named 'train', 'testa' and 'testb' and they correspond to the 'train', 'validation' and 'test' splits.\n\n\nThe splits have the following sizes :\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe dataset was introduced to introduce new resources to two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.",
"### Source Data\n\n\nThe Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.\n\n\nThe Dutch data consist of four editions of the Belgian newspaper \"De Morgen\" of 2000 (June 2, July 1, August 1 and September 1).",
"#### Initial Data Collection and Normalization\n\n\nThe articles were word-tokenized, information on the exact pre-processing pipeline is unavailable.",
"#### Who are the source language producers?\n\n\nThe source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.",
"### Annotations",
"#### Annotation process\n\n\nFor the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.",
"#### Who are the annotators?\n\n\nThe Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).\n\n\nThe Dutch data was annotated as a part of the Atranos project at the University of Antwerp.",
"### Personal and Sensitive Information\n\n\nThe data is sourced from newspaper source and only contains mentions of public figures or individuals\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nNamed Entity Recognition systems can be used to efficiently index news text, allowing to easily gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.",
"### Discussion of Biases\n\n\nNews text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.",
"### Other Known Limitations\n\n\nUsers should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).",
"### Licensing Information\n\n\nThe licensing status of the data, especially the news source text, is unknown.\n\n\nProvide the BibTex-formatted reference for the dataset. For example:",
"### Contributions\n\n\nThanks to @lhoestq for adding this dataset."
] |
[
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-crowdsourced #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #language-Dutch #license-unknown #region-us \n",
"### Dataset Summary\n\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:\n\n\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\n\n\nThe shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).",
"### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.\n\n\nAfter 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.\n\n\n* 'named-entity-recognition': The performance in this task is measured with F1 (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.\n* 'parsing': The performance in this task is measured with F1 (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.",
"### Languages\n\n\nThere are two languages available : Spanish (es) and Dutch (nl).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe examples look like this :\n\n\nThe original data files within the Dutch sub-dataset have '-DOCSTART-' lines used to separate documents, but these lines are removed here.\nIndeed '-DOCSTART-' is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.",
"### Data Fields\n\n\n* 'id': id of the sample\n* 'document\\_id': an 'int32' feature tracking which document the sample is from.\n* 'sentence\\_id': an 'int32' feature tracking which sentence in this document the sample is from.\n* 'tokens': the tokens of the example text\n* 'ner\\_tags': the NER tags of each token\n* 'pos\\_tags': the POS tags of each token\n\n\nThe POS tags correspond to this list for Spanish:\n\n\nAnd this list for Dutch:\n\n\nThe NER tags correspond to this list:\n\n\nThe NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).\n\n\nIt is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity usually, only the top level entity is marked.",
"### Data Splits\n\n\nFor both configurations (Spanish and Dutch), there are three splits.\n\n\nThe original splits were named 'train', 'testa' and 'testb' and they correspond to the 'train', 'validation' and 'test' splits.\n\n\nThe splits have the following sizes :\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe dataset was introduced to introduce new resources to two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.",
"### Source Data\n\n\nThe Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.\n\n\nThe Dutch data consist of four editions of the Belgian newspaper \"De Morgen\" of 2000 (June 2, July 1, August 1 and September 1).",
"#### Initial Data Collection and Normalization\n\n\nThe articles were word-tokenized, information on the exact pre-processing pipeline is unavailable.",
"#### Who are the source language producers?\n\n\nThe source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.",
"### Annotations",
"#### Annotation process\n\n\nFor the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.",
"#### Who are the annotators?\n\n\nThe Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).\n\n\nThe Dutch data was annotated as a part of the Atranos project at the University of Antwerp.",
"### Personal and Sensitive Information\n\n\nThe data is sourced from newspaper source and only contains mentions of public figures or individuals\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nNamed Entity Recognition systems can be used to efficiently index news text, allowing to easily gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.",
"### Discussion of Biases\n\n\nNews text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.",
"### Other Known Limitations\n\n\nUsers should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).",
"### Licensing Information\n\n\nThe licensing status of the data, especially the news source text, is unknown.\n\n\nProvide the BibTex-formatted reference for the dataset. For example:",
"### Contributions\n\n\nThanks to @lhoestq for adding this dataset."
] |
[
112,
221,
383,
27,
83,
253,
79,
40,
64,
35,
33,
5,
44,
79,
38,
97,
65,
45,
32,
44,
17
] |
[
"passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-crowdsourced #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #language-Dutch #license-unknown #region-us \n### Dataset Summary\n\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:\n\n\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\n\n\nThe shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).",
"passage: ### Supported Tasks and Leaderboards\n\n\nNamed Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.\n\n\nAfter 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.\n\n\n* 'named-entity-recognition': The performance in this task is measured with F1 (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.\n* 'parsing': The performance in this task is measured with F1 (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.### Languages\n\n\nThere are two languages available : Spanish (es) and Dutch (nl).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThe examples look like this :\n\n\nThe original data files within the Dutch sub-dataset have '-DOCSTART-' lines used to separate documents, but these lines are removed here.\nIndeed '-DOCSTART-' is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.### Data Fields\n\n\n* 'id': id of the sample\n* 'document\\_id': an 'int32' feature tracking which document the sample is from.\n* 'sentence\\_id': an 'int32' feature tracking which sentence in this document the sample is from.\n* 'tokens': the tokens of the example text\n* 'ner\\_tags': the NER tags of each token\n* 'pos\\_tags': the POS tags of each token\n\n\nThe POS tags correspond to this list for Spanish:\n\n\nAnd this list for Dutch:\n\n\nThe NER tags correspond to this list:\n\n\nThe NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).\n\n\nIt is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity usually, only the top level entity is marked.### Data Splits\n\n\nFor both configurations (Spanish and Dutch), there are three splits.\n\n\nThe original splits were named 'train', 'testa' and 'testb' and they correspond to the 'train', 'validation' and 'test' splits.\n\n\nThe splits have the following sizes :\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe dataset was introduced to introduce new resources to two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.",
"passage: ### Source Data\n\n\nThe Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.\n\n\nThe Dutch data consist of four editions of the Belgian newspaper \"De Morgen\" of 2000 (June 2, July 1, August 1 and September 1).#### Initial Data Collection and Normalization\n\n\nThe articles were word-tokenized, information on the exact pre-processing pipeline is unavailable.#### Who are the source language producers?\n\n\nThe source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.### Annotations#### Annotation process\n\n\nFor the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.#### Who are the annotators?\n\n\nThe Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).\n\n\nThe Dutch data was annotated as a part of the Atranos project at the University of Antwerp.### Personal and Sensitive Information\n\n\nThe data is sourced from newspaper source and only contains mentions of public figures or individuals\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nNamed Entity Recognition systems can be used to efficiently index news text, allowing to easily gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.### Discussion of Biases\n\n\nNews text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.### Other Known Limitations\n\n\nUsers should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nThe annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392)."
] |
c0f7e35b43703fee42f3b52d1e4064af6043ea44
|
# Dataset Card for "donut_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sankettgorey/donut_1
|
[
"region:us"
] |
2023-09-23T09:14:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 111324828.0, "num_examples": 201}], "download_size": 80121300, "dataset_size": 111324828.0}}
|
2023-09-23T09:34:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "donut_1"
More Information needed
|
[
"# Dataset Card for \"donut_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"donut_1\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"donut_1\"\n\nMore Information needed"
] |
bdfc29e4435f62485ee0088988124392a0e72d30
|
# Helix Dataset for Questioning and Instructing (QI)
## Description
The Helix dataset is a specialized collection of data tailored for Questioning and Instructing (QI) tasks. It is created by merging all the Airoboros datasets and incorporating one RosettaCode dataset, with a primary focus on supporting QI research and applications.
## Dataset Details
- **Source Datasets**: Airoboros datasets (various sources), RosettaCode dataset
- **Merging Script**: The merging of these datasets was performed using the `bowie.py` script, which is included in this repository. The script facilitates the formatting and integration of the datasets to create the Helix dataset optimized for QI tasks.
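As a rough sketch only (the authoritative logic lives in `bowie.py`; the dataset ids and column names below are placeholders, not taken from the script), such a merge can be expressed with the `datasets` library:

```python
from datasets import load_dataset, concatenate_datasets

# Placeholder source ids -- bowie.py defines the real Airoboros/RosettaCode list.
SOURCES = ["org/airoboros-subset", "org/rosettacode-subset"]

parts = []
for repo_id in SOURCES:
    ds = load_dataset(repo_id, split="train")
    # Project every source onto a shared schema so the parts can be
    # concatenated; these column names are assumptions.
    parts.append(ds.select_columns(["instruction", "response"]))

helix = concatenate_datasets(parts)
helix.push_to_hub("KaleidoSG/Helix")  # requires Hub authentication
```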
## Usage
The Helix dataset is particularly suited for researchers and developers working on QI tasks, including:
- Developing QI systems that can understand and respond to natural language queries and instructions.
- Training and evaluating machine learning models for QI applications.
- Benchmarking QI algorithms and techniques.
- Investigating the intersection of natural language understanding and instructional responses.
## License
Please refer to the individual licenses of the source datasets for specific licensing information. Ensure compliance with the respective licenses when using the Helix dataset.
## Citation
If you use the Helix dataset for QI research or projects, please consider citing it using the appropriate citation format for each of the source datasets and the `bowie.py` script.
```
Marcus. 2023. Helix Dataset for Questioning and Instructing (QI). Helix. Self-published. https://huggingface.co/datasets/KaleidoSG/Helix
```
## Acknowledgments
We express our gratitude to the creators and maintainers of the Airoboros datasets and the RosettaCode dataset for their valuable contributions to this specialized dataset for Questioning and Instructing (QI) tasks.
|
KaleidoSG/Helix
|
[
"task_categories:question-answering",
"task_categories:translation",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"code",
"airoboros",
"language",
"merge",
"gpt",
"region:us"
] |
2023-09-23T09:30:52+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering", "translation", "summarization", "text-generation", "conversational"], "pretty_name": "helix", "tags": ["code", "airoboros", "language", "merge", "gpt"]}
|
2023-09-23T13:24:23+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #task_categories-translation #task_categories-summarization #task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #code #airoboros #language #merge #gpt #region-us
|
# Helix Dataset for Questioning and Instructing (QI)
## Description
The Helix dataset is a specialized collection of data tailored for Questioning and Instructing (QI) tasks. It is created by merging all the Airoboros datasets and incorporating one RosettaCode dataset, with a primary focus on supporting QI research and applications.
## Dataset Details
- Source Datasets: Airoboros datasets (various sources), RosettaCode dataset
- Merging Script: The merging of these datasets was performed using the 'URL' script, which is included in this repository. The script facilitates the formatting and integration of the datasets to create the Helix dataset optimized for QI tasks.
## Usage
The Helix dataset is particularly suited for researchers and developers working on QI tasks, including:
- Developing QI systems that can understand and respond to natural language queries and instructions.
- Training and evaluating machine learning models for QI applications.
- Benchmarking QI algorithms and techniques.
- Investigating the intersection of natural language understanding and instructional responses.
## License
Please refer to the individual licenses of the source datasets for specific licensing information. Ensure compliance with the respective licenses when using the Helix dataset.
If you use the Helix dataset for QI research or projects, please consider citing it using the appropriate citation format for each of the source datasets and the 'URL' script.
## Acknowledgments
We express our gratitude to the creators and maintainers of the Airoboros datasets and the RosettaCode dataset for their valuable contributions to this specialized dataset for Questioning and Instructing (QI) tasks.
|
[
"# Helix Dataset for Questioning and Instructing (QI)",
"## Description\nThe Helix dataset is a specialized collection of data tailored for Questioning and Instructing (QI) tasks. It is created by merging all the Airoboros datasets and incorporating one RosettaCode dataset, with a primary focus on supporting QI research and applications.",
"## Dataset Details\n- Source Datasets: Airoboros datasets (various sources), RosettaCode dataset\n- Merging Script: The merging of these datasets was performed using the 'URL' script, which is included in this repository. The script facilitates the formatting and integration of the datasets to create the Helix dataset optimized for QI tasks.",
"## Usage\nThe Helix dataset is particularly suited for researchers and developers working on QI tasks, including:\n- Developing QI systems that can understand and respond to natural language queries and instructions.\n- Training and evaluating machine learning models for QI applications.\n- Benchmarking QI algorithms and techniques.\n- Investigating the intersection of natural language understanding and instructional responses.",
"## License\nPlease refer to the individual licenses of the source datasets for specific licensing information. Ensure compliance with the respective licenses when using the Helix dataset.\n\nIf you use the Helix dataset for QI research or projects, please consider citing it using the appropriate citation format for each of the source datasets and the 'URL' script.",
"## Acknowledgments\nWe express our gratitude to the creators and maintainers of the Airoboros datasets and the RosettaCode dataset for their valuable contributions to this specialized dataset for Questioning and Instructing (QI) tasks."
] |
[
"TAGS\n#task_categories-question-answering #task_categories-translation #task_categories-summarization #task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #code #airoboros #language #merge #gpt #region-us \n",
"# Helix Dataset for Questioning and Instructing (QI)",
"## Description\nThe Helix dataset is a specialized collection of data tailored for Questioning and Instructing (QI) tasks. It is created by merging all the Airoboros datasets and incorporating one RosettaCode dataset, with a primary focus on supporting QI research and applications.",
"## Dataset Details\n- Source Datasets: Airoboros datasets (various sources), RosettaCode dataset\n- Merging Script: The merging of these datasets was performed using the 'URL' script, which is included in this repository. The script facilitates the formatting and integration of the datasets to create the Helix dataset optimized for QI tasks.",
"## Usage\nThe Helix dataset is particularly suited for researchers and developers working on QI tasks, including:\n- Developing QI systems that can understand and respond to natural language queries and instructions.\n- Training and evaluating machine learning models for QI applications.\n- Benchmarking QI algorithms and techniques.\n- Investigating the intersection of natural language understanding and instructional responses.",
"## License\nPlease refer to the individual licenses of the source datasets for specific licensing information. Ensure compliance with the respective licenses when using the Helix dataset.\n\nIf you use the Helix dataset for QI research or projects, please consider citing it using the appropriate citation format for each of the source datasets and the 'URL' script.",
"## Acknowledgments\nWe express our gratitude to the creators and maintainers of the Airoboros datasets and the RosettaCode dataset for their valuable contributions to this specialized dataset for Questioning and Instructing (QI) tasks."
] |
[
97,
15,
67,
87,
89,
79,
57
] |
[
"passage: TAGS\n#task_categories-question-answering #task_categories-translation #task_categories-summarization #task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-cc-by-4.0 #code #airoboros #language #merge #gpt #region-us \n# Helix Dataset for Questioning and Instructing (QI)## Description\nThe Helix dataset is a specialized collection of data tailored for Questioning and Instructing (QI) tasks. It is created by merging all the Airoboros datasets and incorporating one RosettaCode dataset, with a primary focus on supporting QI research and applications.## Dataset Details\n- Source Datasets: Airoboros datasets (various sources), RosettaCode dataset\n- Merging Script: The merging of these datasets was performed using the 'URL' script, which is included in this repository. The script facilitates the formatting and integration of the datasets to create the Helix dataset optimized for QI tasks.## Usage\nThe Helix dataset is particularly suited for researchers and developers working on QI tasks, including:\n- Developing QI systems that can understand and respond to natural language queries and instructions.\n- Training and evaluating machine learning models for QI applications.\n- Benchmarking QI algorithms and techniques.\n- Investigating the intersection of natural language understanding and instructional responses.## License\nPlease refer to the individual licenses of the source datasets for specific licensing information. Ensure compliance with the respective licenses when using the Helix dataset.\n\nIf you use the Helix dataset for QI research or projects, please consider citing it using the appropriate citation format for each of the source datasets and the 'URL' script.## Acknowledgments\nWe express our gratitude to the creators and maintainers of the Airoboros datasets and the RosettaCode dataset for their valuable contributions to this specialized dataset for Questioning and Instructing (QI) tasks."
] |
7c80ab2d661412021edad54519c0553890d7fb5f
|
# Dataset Card for "donut_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sankettgorey/donut_2
|
[
"region:us"
] |
2023-09-23T09:38:12+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 186356508.0, "num_examples": 601}], "download_size": 145287831, "dataset_size": 186356508.0}}
|
2023-09-23T09:38:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "donut_2"
More Information needed
|
[
"# Dataset Card for \"donut_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"donut_2\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"donut_2\"\n\nMore Information needed"
] |
8e19719202c5b55e5bfd88c7ea9623d61fd7db75
|
# Dataset Card for Evaluation run of Corianas/Quokka_256m
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Corianas/Quokka_256m
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Corianas/Quokka_256m](https://huggingface.co/Corianas/Quokka_256m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Corianas__Quokka_256m",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-23T10:43:58.208940](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_256m/blob/main/results_2023-09-23T10-43-58.208940.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.003984899328859061,
"em_stderr": 0.0006451805848102272,
"f1": 0.04266883389261752,
"f1_stderr": 0.0013952300953918367,
"acc": 0.2612470402525651,
"acc_stderr": 0.007019128912029941
},
"harness|drop|3": {
"em": 0.003984899328859061,
"em_stderr": 0.0006451805848102272,
"f1": 0.04266883389261752,
"f1_stderr": 0.0013952300953918367
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5224940805051302,
"acc_stderr": 0.014038257824059881
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Corianas__Quokka_256m
|
[
"region:us"
] |
2023-09-23T09:44:01+00:00
|
{"pretty_name": "Evaluation run of Corianas/Quokka_256m", "dataset_summary": "Dataset automatically created during the evaluation run of model [Corianas/Quokka_256m](https://huggingface.co/Corianas/Quokka_256m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Corianas__Quokka_256m\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-23T10:43:58.208940](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_256m/blob/main/results_2023-09-23T10-43-58.208940.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003984899328859061,\n \"em_stderr\": 0.0006451805848102272,\n \"f1\": 0.04266883389261752,\n \"f1_stderr\": 0.0013952300953918367,\n \"acc\": 0.2612470402525651,\n \"acc_stderr\": 0.007019128912029941\n },\n \"harness|drop|3\": {\n \"em\": 0.003984899328859061,\n \"em_stderr\": 0.0006451805848102272,\n \"f1\": 0.04266883389261752,\n \"f1_stderr\": 0.0013952300953918367\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5224940805051302,\n \"acc_stderr\": 0.014038257824059881\n }\n}\n```", "repo_url": "https://huggingface.co/Corianas/Quokka_256m", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_23T10_43_58.208940", "path": ["**/details_harness|drop|3_2023-09-23T10-43-58.208940.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-23T10-43-58.208940.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_23T10_43_58.208940", "path": ["**/details_harness|gsm8k|5_2023-09-23T10-43-58.208940.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-23T10-43-58.208940.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_23T10_43_58.208940", "path": ["**/details_harness|winogrande|5_2023-09-23T10-43-58.208940.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-23T10-43-58.208940.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_23T10_43_58.208940", "path": ["results_2023-09-23T10-43-58.208940.parquet"]}, {"split": "latest", "path": ["results_2023-09-23T10-43-58.208940.parquet"]}]}]}
|
2023-09-23T09:44:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Corianas/Quokka_256m
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Corianas/Quokka_256m on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
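For example, to fetch the five-shot Winogrande details split (this snippet mirrors the loader given in this card's configuration metadata):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Corianas__Quokka_256m",
    "harness_winogrande_5",
    split="train")
```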
## Latest results
These are the latest results from run 2023-09-23T10:43:58.208940 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Corianas/Quokka_256m",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Corianas/Quokka_256m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T10:43:58.208940(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Corianas/Quokka_256m",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Corianas/Quokka_256m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T10:43:58.208940(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
19,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Corianas/Quokka_256m## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Corianas/Quokka_256m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-23T10:43:58.208940(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
18c8b1fcbfa88ef69c527661c0a6c05d66a88f1b
|
# Dataset Card for "sudo-floor-plan-12k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zimhe/sudo-floor-plan-12k
|
[
"region:us"
] |
2023-09-23T09:44:37+00:00
|
{"dataset_info": {"features": [{"name": "indices", "dtype": "string"}, {"name": "plans", "dtype": "image"}, {"name": "walls", "dtype": "image"}, {"name": "colors", "dtype": "image"}, {"name": "footprints", "dtype": "image"}, {"name": "plan_captions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3999080609.0, "num_examples": 12000}], "download_size": 2497201625, "dataset_size": 3999080609.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T12:43:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sudo-floor-plan-12k"
More Information needed
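A minimal loading sketch, based on the feature schema in this card's metadata (the column names come from that schema; everything else is an assumption):

```python
from datasets import load_dataset

# Single "train" split with 12,000 examples, per the dataset_info metadata.
ds = load_dataset("zimhe/sudo-floor-plan-12k", split="train")

example = ds[0]
print(example["indices"], example["plan_captions"])
# The four image columns decode to PIL images: plans, walls, colors, footprints.
example["plans"].show()
```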
|
[
"# Dataset Card for \"sudo-floor-plan-12k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sudo-floor-plan-12k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sudo-floor-plan-12k\"\n\nMore Information needed"
] |
ddc8a60158052c3c58e536a5a81054e6ac2d37b1
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_large_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/shp-generated_flan_t5_large_flan_t5_large_zeroshot
|
[
"region:us"
] |
2023-09-23T09:59:21+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "zeroshot_helpfulness", "dtype": "float64"}, {"name": "zeroshot_specificity", "dtype": "float64"}, {"name": "zeroshot_intent", "dtype": "float64"}, {"name": "zeroshot_factuality", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand", "dtype": "float64"}, {"name": "zeroshot_relevance", "dtype": "float64"}, {"name": "zeroshot_readability", "dtype": "float64"}, {"name": "zeroshot_enough-detail", "dtype": "float64"}, {"name": "zeroshot_biased:", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences", "dtype": "float64"}, {"name": "zeroshot_repetetive", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context", "dtype": "float64"}, {"name": "zeroshot_too-long", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 29493865, "num_examples": 25600}], "download_size": 1905432, "dataset_size": 29493865}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T09:59:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_large_zeroshot"
More Information needed
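A short sketch of how one might load and inspect the zero-shot score columns listed in the metadata (repo id and column names are taken from the schema above; the usage is assumed):

```python
from datasets import load_dataset

ds = load_dataset(
    "dongyoung4091/shp-generated_flan_t5_large_flan_t5_large_zeroshot",
    split="train",
)

# Each row pairs a prompt/response with zero-shot judge scores
# (helpfulness, specificity, factuality, ...) stored as float columns.
zeroshot_cols = [c for c in ds.column_names if c.startswith("zeroshot_")]
print({c: ds[0][c] for c in zeroshot_cols})
```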
|
[
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_large_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_large_zeroshot\"\n\nMore Information needed"
] |
[
6,
34
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_large_zeroshot\"\n\nMore Information needed"
] |
3109e76c73c85cb12c91c5b4a9009f2fc9845e50
|
# Dataset Card for "test_01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ossaili/test_01
|
[
"region:us"
] |
2023-09-23T10:02:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 102096.0, "num_examples": 1}], "download_size": 103703, "dataset_size": 102096.0}}
|
2023-09-23T10:03:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_01"
More Information needed
|
[
"# Dataset Card for \"test_01\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_01\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_01\"\n\nMore Information needed"
] |
4899b2bc26f23d878aaae6a7ad2808f5666c1f27
|
# Scenery of Japan.
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- **Homepage:**
- https://www.deviantart.com/japanmaterial
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
JapanDegitalMaterial/Scenery_of_japan
|
[
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:cc0-1.0",
"region:us"
] |
2023-09-23T10:08:44+00:00
|
{"language": ["en", "ja"], "license": "cc0-1.0", "task_categories": ["text-to-image"]}
|
2023-09-23T13:32:48+00:00
|
[] |
[
"en",
"ja"
] |
TAGS
#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us
|
# Scenery of Japan.
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- Homepage:
- URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Scenery of japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n",
"# Scenery of japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
36,
45,
26,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n# Scenery of japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).## Dataset Description\n\n- Homepage:\n- URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ffb9c697e1d084f29bf72c0cc5e4931d702ffc21
|
# Dataset Card for "research_rnn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adhok/research_rnn
|
[
"region:us"
] |
2023-09-23T10:18:14+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89896, "num_examples": 282}], "download_size": 29788, "dataset_size": 89896}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T10:19:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "research_rnn"
More Information needed
|
[
"# Dataset Card for \"research_rnn\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"research_rnn\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"research_rnn\"\n\nMore Information needed"
] |
163571ae23185a4725357c5645b9c1d0e0942150
|
# Dataset Card for "One-Piece-anime-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ayoubkirouane/One-Piece-anime-captions
|
[
"region:us"
] |
2023-09-23T10:19:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28504098.0, "num_examples": 856}], "download_size": 28452041, "dataset_size": 28504098.0}}
|
2023-09-23T10:20:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "One-Piece-anime-captions"
More Information needed
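A minimal loading sketch, assuming only the image/text schema given in the metadata:

```python
from datasets import load_dataset

# 856 (image, caption) pairs, e.g. as training data for a text-to-image model.
ds = load_dataset("ayoubkirouane/One-Piece-anime-captions", split="train")

example = ds[0]
print(example["text"])
example["image"].show()
```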
|
[
"# Dataset Card for \"One-Piece-anime-captions\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"One-Piece-anime-captions\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"One-Piece-anime-captions\"\n\nMore Information needed"
] |
be7be5ecffa144b2d100af9da19cf1bc0d18c662
|
# Dataset Card for "ar-higher"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
taldarim/ar-higher
|
[
"region:us"
] |
2023-09-23T10:21:48+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "Comprehension", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Configuration", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Crashes", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Implementation", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Performance issue", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Results interpretation", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 373318, "num_examples": 280}, {"name": "test", "num_bytes": 369328, "num_examples": 236}], "download_size": 186867, "dataset_size": 742646}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-09-23T10:23:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ar-higher"
More Information needed
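A minimal sketch for inspecting the six binary class_label columns described in the metadata (column names from the schema; the usage is assumed):

```python
from datasets import load_dataset

ds = load_dataset("taldarim/ar-higher")

# Every example carries one text field plus six 0/1 labels:
# Comprehension, Configuration, Crashes, Implementation,
# Performance issue, Results interpretation.
row = ds["train"][0]
print({k: v for k, v in row.items() if k != "text"})
```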
|
[
"# Dataset Card for \"ar-higher\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ar-higher\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ar-higher\"\n\nMore Information needed"
] |
6fb6ce14f51023ce9ca2205d2f2de8d511ef2428
|
# Dataset Card for "claim_detection_training_set"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nikchar/claim_detection_training_set
|
[
"region:us"
] |
2023-09-23T10:30:10+00:00
|
{"dataset_info": {"features": [{"name": "claim", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4017668, "num_examples": 38514}], "download_size": 2754089, "dataset_size": 4017668}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T10:30:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "claim_detection_training_set"
More Information needed
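A small sketch, assuming the claim/labels schema from the metadata:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("nikchar/claim_detection_training_set", split="train")

# 38,514 claims, each with an integer label; check the label distribution.
print(Counter(ds["labels"]))
print(ds[0]["claim"])
```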
|
[
"# Dataset Card for \"claim_detection_training_set\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"claim_detection_training_set\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"claim_detection_training_set\"\n\nMore Information needed"
] |
aa49f92a8667f5a704ff576c728765c236940c6c
|
This is the open-source dataset of our EMNLP 2023 paper **COCO: Coherence-Enhanced Machine-Generated Text Detection Under Low Resource With Contrastive Learning** (https://arxiv.org/abs/2212.10341) from XJTU.
If you have any problems using it, please feel free to contact us!
A more detailed description is on the way...
|
ZachW/MGTDetect_CoCo
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"arxiv:2212.10341",
"region:us"
] |
2023-09-23T10:34:11+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "CoCo_MGT_Detection"}
|
2023-10-18T17:28:47+00:00
|
[
"2212.10341"
] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #arxiv-2212.10341 #region-us
|
This is the open-source dataset of our EMNLP 2023 paper COCO: Coherence-Enhanced Machine-Generated Text Detection Under Low Resource With Contrastive Learning (URL) from XJTU.
If you have any problems using it, please feel free to contact us!
A more detailed description is on the way...
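Until the detailed description lands, a generic loading sketch (the splits and column names are not documented in this card, so treat them as assumptions to verify):

```python
from datasets import load_dataset

# Inspect what the repository actually exposes before relying on any names.
ds = load_dataset("ZachW/MGTDetect_CoCo")
print(ds)  # available splits
for split in ds:
    print(split, ds[split].column_names)
```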
|
[] |
[
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #arxiv-2212.10341 #region-us \n"
] |
[
47
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #arxiv-2212.10341 #region-us \n"
] |
d9fa1690ea19b70fcd79e131479a34a9fd2fb9ea
|
# Dataset Card for "instructionPairedFormularDataset13kPreProcessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crewdon/instructionPairedFormularDataset13kPreProcessed
|
[
"region:us"
] |
2023-09-23T11:06:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21215159, "num_examples": 13655}], "download_size": 2794474, "dataset_size": 21215159}}
|
2023-09-23T11:06:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "instructionPairedFormularDataset13kPreProcessed"
More Information needed
|
[
"# Dataset Card for \"instructionPairedFormularDataset13kPreProcessed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"instructionPairedFormularDataset13kPreProcessed\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"instructionPairedFormularDataset13kPreProcessed\"\n\nMore Information needed"
] |
76611f0d49215f43fe2047a7f414b40f7d5bcd03
|
# Texture images
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- **Homepage:**
- https://www.deviantart.com/japanmaterial
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
JapanDegitalMaterial/Texture_images
|
[
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:cc0-1.0",
"region:us"
] |
2023-09-23T11:16:01+00:00
|
{"language": ["en", "ja"], "license": "cc0-1.0", "task_categories": ["text-to-image"]}
|
2023-09-23T13:05:23+00:00
|
[] |
[
"en",
"ja"
] |
TAGS
#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us
|
# Texture images
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- Homepage:
- URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Textuer images\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n",
"# Textuer images\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
36,
43,
27,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n# Textuer images\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4e99ec4d126c3db49fd8953200eaa1d189ee8f09
|
# Objects in Japan.
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- **Homepage:**
- https://www.deviantart.com/japanmaterial
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
JapanDegitalMaterial/Objects_in_Japan
|
[
"license:cc0-1.0",
"region:us"
] |
2023-09-23T11:30:58+00:00
|
{"license": "cc0-1.0"}
|
2023-09-23T13:19:40+00:00
|
[] |
[] |
TAGS
#license-cc0-1.0 #region-us
|
# Objects in Japan.
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- Homepage:
- URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Objects in japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#license-cc0-1.0 #region-us \n",
"# Objects in japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
14,
45,
27,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#license-cc0-1.0 #region-us \n# Objects in japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0cc93383b39f36d3db8d6d438602f33b14bc54e6
|
# Places in Japan.
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- **Homepage:**
- https://www.deviantart.com/japanmaterial
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
JapanDegitalMaterial/Places_in_Japan
|
[
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:cc0-1.0",
"region:us"
] |
2023-09-23T11:35:06+00:00
|
{"language": ["en", "ja"], "license": "cc0-1.0", "task_categories": ["text-to-image"]}
|
2023-09-23T13:00:16+00:00
|
[] |
[
"en",
"ja"
] |
TAGS
#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us
|
# Places in Japan.
This is a dataset to train text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 (Public Domain / P.D.).
## Dataset Description
- Homepage:
- URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Places in japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n",
"# Places in japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).",
"## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
36,
45,
27,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-to-image #language-English #language-Japanese #license-cc0-1.0 #region-us \n# Places in japan.\n\nThis is a dataset to train text-to-image or other models without any copyright issue.\nAll materials used in this dataset are CC0 (Public domain /P.D.).## Dataset Description\n\n- Homepage:\n- URL\n- \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
01f9e651883b41cbbe1121728ee1c98b7a1afd64
|
# Bangumi Image Base of Puella Magi Madoka Magica
This is the image base of the bangumi Puella Magi Madoka Magica; we detected 17 characters and 2,197 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain some noise.** If you intend to train models on this dataset manually, we recommend preprocessing the downloaded data to eliminate potentially noisy samples (roughly a 1% rate).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 561 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 238 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 29 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 355 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 392 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 45 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 32 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 12 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 15 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 16 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 6 | [Download](10/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 11 | 58 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 150 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 64 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 13 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 13 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 198 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
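A hedged snippet for fetching one of the per-character archives linked in the table above (the filenames come from the table; the `hf_hub_download` usage is standard but untested against this repo):

```python
from huggingface_hub import hf_hub_download

# Character cluster 0 (561 images); use filename="all.zip" for the full base.
path = hf_hub_download(
    repo_id="BangumiBase/puellamagimadokamagica",
    filename="0/dataset.zip",
    repo_type="dataset",
)
print(path)
```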
|
BangumiBase/puellamagimadokamagica
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-23T11:35:46+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T10:24:42+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Puella Magi Madoka Magica
===============================================
This is the image base of the bangumi Puella Magi Madoka Magica; we detected 17 characters and 2,197 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may still contain some noise. If you intend to train models on this dataset manually, we recommend preprocessing the downloaded data to eliminate potentially noisy samples (roughly a 1% rate).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
9ce63f4d710883d2c65b15a93b01dfcadebf4800
|
<img src="Seaskull.jpeg" alt="seaskull" width="400" height="400">
# Seaskull Dataset
## Description
The Seaskull dataset follows the same format and purpose as the Helix dataset but contains distinct data. It has undergone a cleaning process to ensure data quality and usability.
## Dataset Details
- **Source Dataset**: Private Haribon dataset
- **Data Cleaning**: The Seaskull dataset has been cleaned to eliminate Null and NaN values, ensuring data reliability.
## License
Please adhere to the licensing terms provided by the dataset owner for access and usage of the Seaskull dataset.
## Citation
If you use the Seaskull dataset in your research or projects, make sure to follow the citation guidelines and requirements set forth by the dataset owner.
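A minimal loading sketch; per this card's configuration metadata, the default config maps the "train" split to `Haribon.csv` (everything beyond that is an assumption):

```python
from datasets import load_dataset

# The default config points the "train" split at Haribon.csv.
ds = load_dataset("KaleidoSG/Seaskull", split="train")
print(ds.column_names)
```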
|
KaleidoSG/Seaskull
|
[
"task_categories:question-answering",
"task_categories:translation",
"task_categories:conversational",
"task_categories:summarization",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-23T11:42:18+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["question-answering", "translation", "conversational", "summarization"], "pretty_name": "seagull", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "Haribon.csv"}]}]}
|
2023-09-23T13:39:14+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #task_categories-translation #task_categories-conversational #task_categories-summarization #language-English #license-cc-by-4.0 #region-us
|
<img src="URL" alt="seaskull" width="400" height="400">
# Seaskull Dataset
## Description
The Seaskull dataset follows the same format and purpose as the Helix dataset but contains distinct data. It has undergone a cleaning process to ensure data quality and usability.
## Dataset Details
- Source Dataset: Private Haribon dataset
- Data Cleaning: The Seaskull dataset has been cleaned to eliminate Null and NaN values, ensuring data reliability.
## License
Please adhere to the licensing terms provided by the dataset owner for access and usage of the Seaskull dataset.
If you use the Seaskull dataset in your research or projects, make sure to follow the citation guidelines and requirements set forth by the dataset owner.
|
[
"# Seaskull Dataset",
"## Description\nThe Seaskull dataset follows the same format and purpose as the Helix dataset but contains distinct data. It has undergone a cleaning process to ensure data quality and usability.",
"## Dataset Details\n- Source Dataset: Private Haribon dataset\n- Data Cleaning: The Seaskull dataset has been cleaned to eliminate Null and NaN values, ensuring data reliability.",
"## License\nPlease adhere to the licensing terms provided by the dataset owner for access and usage of the Seaskull dataset.\n\nIf you use the Seaskull dataset in your research or projects, make sure to follow the citation guidelines and requirements set forth by the dataset owner."
] |
[
"TAGS\n#task_categories-question-answering #task_categories-translation #task_categories-conversational #task_categories-summarization #language-English #license-cc-by-4.0 #region-us \n",
"# Seaskull Dataset",
"## Description\nThe Seaskull dataset follows the same format and purpose as the Helix dataset but contains distinct data. It has undergone a cleaning process to ensure data quality and usability.",
"## Dataset Details\n- Source Dataset: Private Haribon dataset\n- Data Cleaning: The Seaskull dataset has been cleaned to eliminate Null and NaN values, ensuring data reliability.",
"## License\nPlease adhere to the licensing terms provided by the dataset owner for access and usage of the Seaskull dataset.\n\nIf you use the Seaskull dataset in your research or projects, make sure to follow the citation guidelines and requirements set forth by the dataset owner."
] |
[
60,
6,
43,
46,
63
] |
[
"passage: TAGS\n#task_categories-question-answering #task_categories-translation #task_categories-conversational #task_categories-summarization #language-English #license-cc-by-4.0 #region-us \n# Seaskull Dataset## Description\nThe Seaskull dataset follows the same format and purpose as the Helix dataset but contains distinct data. It has undergone a cleaning process to ensure data quality and usability.## Dataset Details\n- Source Dataset: Private Haribon dataset\n- Data Cleaning: The Seaskull dataset has been cleaned to eliminate Null and NaN values, ensuring data reliability.## License\nPlease adhere to the licensing terms provided by the dataset owner for access and usage of the Seaskull dataset.\n\nIf you use the Seaskull dataset in your research or projects, make sure to follow the citation guidelines and requirements set forth by the dataset owner."
] |
d8255ec55a63d53df8c184d5fa8c7817d45e9660
|
# Dataset Card for "ar-higher-merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
taldarim/ar-higher-merged
|
[
"region:us"
] |
2023-09-23T11:53:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 374438, "num_examples": 280}, {"name": "test", "num_bytes": 370272, "num_examples": 236}], "download_size": 283162, "dataset_size": 744710}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-09-23T11:53:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ar-higher-merged"
More Information needed
|
[
"# Dataset Card for \"ar-higher-merged\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ar-higher-merged\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ar-higher-merged\"\n\nMore Information needed"
] |
6770ab169c445c904bc248c1c0488ab05e362751
|
# Dataset Card for "data_docs.jsonl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
18moumi/data_docs.jsonl
|
[
"region:us"
] |
2023-09-23T12:32:56+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44839, "num_examples": 142}], "download_size": 20983, "dataset_size": 44839}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-23T13:08:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_docs.jsonl"
More Information needed
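A small sketch, assuming the question/answer schema given in the metadata:

```python
from datasets import load_dataset

# 142 question/answer pairs in a single "train" split.
ds = load_dataset("18moumi/data_docs.jsonl", split="train")
print(ds[0]["question"])
print(ds[0]["answer"])
```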
|
[
"# Dataset Card for \"data_docs.jsonl\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_docs.jsonl\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_docs.jsonl\"\n\nMore Information needed"
] |