sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cc30d15c6ec7044a6e44d43a0c7ee0ea6a58ff4a
|
## Dataset Audio Duration
The dataset consists of 810 audio recordings, each accompanied by its respective transcription. The lexical corpus encompasses approximately 1,000 unique words.
- **Total Audio Duration**: 2801 seconds (approximately 47 minutes)
- **Average Audio Duration**: 3.41 seconds
The dataset offers valuable insights into the phonetic and linguistic characteristics of the Wayuunaiki language. Note that the dataset originates from recordings and transcriptions of the Bible in Wayuunaiki. Due to proprietary restrictions, the dataset cannot be shared publicly; use of this data falls under the 'fair use' principles of copyright law.
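Duration statistics like the ones above can be recomputed directly from the decoded audio. The sketch below assumes examples shaped like the Hugging Face `datasets` Audio feature (a dict with an `array` and a `sampling_rate`); since the real recordings cannot be shared, synthetic clips stand in for them here.

```python
# Sketch: recomputing duration statistics for an ASR corpus.
# Assumes each example carries a decoded "audio" dict shaped like the
# Hugging Face `datasets` Audio feature; the clips below are synthetic
# stand-ins for the private recordings.

def duration_stats(examples):
    """Return clip count, total duration, and average duration in seconds."""
    durations = [
        len(ex["audio"]["array"]) / ex["audio"]["sampling_rate"]
        for ex in examples
    ]
    total = sum(durations)
    return {
        "count": len(durations),
        "total_seconds": total,
        "average_seconds": total / len(durations),
    }

# Toy check with ten 3-second clips at 16 kHz:
fake = [
    {"audio": {"array": [0.0] * (16000 * 3), "sampling_rate": 16000}}
    for _ in range(10)
]
stats = duration_stats(fake)
```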
|
orkidea/wayuu_CO_test
|
[
"task_categories:automatic-speech-recognition",
"size_categories:n<1K",
"language:guc",
"license:other",
"region:us"
] |
2023-10-11T15:13:55+00:00
|
{"language": ["guc"], "license": "other", "size_categories": ["n<1K"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "Wayuu language dataset", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123621131.0, "num_examples": 810}], "download_size": 122728843, "dataset_size": 123621131.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T19:34:51+00:00
|
[] |
[
"guc"
] |
TAGS
#task_categories-automatic-speech-recognition #size_categories-n<1K #language-Wayuu #license-other #region-us
|
## Dataset Audio Duration
The dataset consists of 810 audio recordings, each accompanied by its respective transcription. The lexical corpus encompasses approximately 1,000 unique words.
- Total Audio Duration: 2801 seconds (approximately 47 minutes)
- Average Audio Duration: 3.41 seconds
The dataset offers valuable insights into the phonetic and linguistic characteristics of the Wayuunaiki language. Note that the dataset originates from recordings and transcriptions of the Bible in Wayuunaiki. Due to proprietary restrictions, the dataset cannot be shared publicly; use of this data falls under the 'fair use' principles of copyright law.
|
[
"## Dataset Audio Duration\n\nThe dataset consists of 810 audio recordings, each accompanied by its respective transcription. The lexical corpus encompasses approximately 1,000 unique words.\n\n- Total Audio Duration: 2801 seconds (approximately 47 minutes)\n- Average Audio Duration: 3.41 seconds\n\n\nThe dataset offers valuable insights into the Wayuunaiki language's phonetic and linguistic characteristics. It's important to note that the dataset originates from recordings and transcriptions of the Bible in Wayuunaiki. Due to proprietary restrictions, the dataset cannot be shared publicly. The use of this data is protected under the principles of 'fair use' copyright."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #size_categories-n<1K #language-Wayuu #license-other #region-us \n",
"## Dataset Audio Duration\n\nThe dataset consists of 810 audio recordings, each accompanied by its respective transcription. The lexical corpus encompasses approximately 1,000 unique words.\n\n- Total Audio Duration: 2801 seconds (approximately 47 minutes)\n- Average Audio Duration: 3.41 seconds\n\n\nThe dataset offers valuable insights into the Wayuunaiki language's phonetic and linguistic characteristics. It's important to note that the dataset originates from recordings and transcriptions of the Bible in Wayuunaiki. Due to proprietary restrictions, the dataset cannot be shared publicly. The use of this data is protected under the principles of 'fair use' copyright."
] |
[
42,
154
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #size_categories-n<1K #language-Wayuu #license-other #region-us \n## Dataset Audio Duration\n\nThe dataset consists of 810 audio recordings, each accompanied by its respective transcription. The lexical corpus encompasses approximately 1,000 unique words.\n\n- Total Audio Duration: 2801 seconds (approximately 47 minutes)\n- Average Audio Duration: 3.41 seconds\n\n\nThe dataset offers valuable insights into the Wayuunaiki language's phonetic and linguistic characteristics. It's important to note that the dataset originates from recordings and transcriptions of the Bible in Wayuunaiki. Due to proprietary restrictions, the dataset cannot be shared publicly. The use of this data is protected under the principles of 'fair use' copyright."
] |
499aa3af8472887900fc292c7ed1364cfde9a670
|
# Dataset Card for "my-NFT-classifier-dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hongerzh/my-NFT-classifier-dataset2
|
[
"region:us"
] |
2023-10-11T15:25:06+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "notsale", "1": "sale"}}}}], "splits": [{"name": "train", "num_bytes": 6616368416.534, "num_examples": 28754}, {"name": "validation", "num_bytes": 2371905486.911, "num_examples": 9593}, {"name": "test", "num_bytes": 2720948631.066, "num_examples": 9569}], "download_size": 8806005035, "dataset_size": 11709222534.511}}
|
2023-10-11T16:28:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "my-NFT-classifier-dataset2"
More Information needed
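The split sizes declared in this row's metadata (28,754 train, 9,593 validation, and 9,569 test examples) work out to roughly a 60/20/20 partition, which a quick calculation confirms:

```python
# Split sizes taken from the dataset_info block above.
splits = {"train": 28754, "validation": 9593, "test": 9569}

total = sum(splits.values())  # 47,916 examples overall
fractions = {name: round(n / total, 3) for name, n in splits.items()}
# fractions -> {"train": 0.6, "validation": 0.2, "test": 0.2}
```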
|
[
"# Dataset Card for \"my-NFT-classifier-dataset2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"my-NFT-classifier-dataset2\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"my-NFT-classifier-dataset2\"\n\nMore Information needed"
] |
fdc43900d6e9b8616dff8a79e14668cb082aea65
|
# Dataset Card for "Simple-Solidity-Slither-Vulnerabilities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Royal-lobster/Slither-Audited-Solidity-QA
|
[
"task_categories:question-answering",
"language:en",
"license:mit",
"solidity",
"alpaca",
"smart contracts",
"slither",
"region:us"
] |
2023-10-11T15:29:08+00:00
|
{"language": ["en"], "license": "mit", "task_categories": ["question-answering"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 519875022.0539211, "num_examples": 8611}, {"name": "test", "num_bytes": 100783891.24375294, "num_examples": 1748}, {"name": "validation", "num_bytes": 76457098.65464632, "num_examples": 1151}], "download_size": 98570750, "dataset_size": 697116011.9523203}, "tags": ["solidity", "alpaca", "smart contracts", "slither"]}
|
2023-10-11T15:52:46+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #language-English #license-mit #solidity #alpaca #smart contracts #slither #region-us
|
# Dataset Card for "Simple-Solidity-Slither-Vulnerabilities"
More Information needed
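The features declared in this row's metadata (`instruction`, `input`, `output`, `text`) follow the Alpaca convention, where `text` is typically the full prompt assembled from the other three fields. A sketch of that assembly is below; the template shown is the common Alpaca layout, assumed rather than confirmed for this particular dataset, and the example strings are invented.

```python
# Sketch: assembling an Alpaca-style prompt from the declared fields.
# The template is the standard Alpaca layout, assumed here rather than
# confirmed for this dataset; the field contents are illustrative only.
def build_prompt(instruction: str, input_text: str, output: str) -> str:
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{input_text}\n\n"
        f"### Response:\n{output}"
    )

example = build_prompt(
    "List the vulnerabilities Slither reports for this contract.",
    "contract C { function f() public { /* ... */ } }",
    "No issues detected in this minimal example.",
)
```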
|
[
"# Dataset Card for \"Simple-Solidity-Slither-Vulnerabilities\"\n\nMore Information needed"
] |
[
"TAGS\n#task_categories-question-answering #language-English #license-mit #solidity #alpaca #smart contracts #slither #region-us \n",
"# Dataset Card for \"Simple-Solidity-Slither-Vulnerabilities\"\n\nMore Information needed"
] |
[
41,
26
] |
[
"passage: TAGS\n#task_categories-question-answering #language-English #license-mit #solidity #alpaca #smart contracts #slither #region-us \n# Dataset Card for \"Simple-Solidity-Slither-Vulnerabilities\"\n\nMore Information needed"
] |
4d444acb61c5010e11ee596fea6f55543c399032
|
# Dataset Card for Evaluation run of adept/persimmon-8b-base
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/adept/persimmon-8b-base
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [adept/persimmon-8b-base](https://huggingface.co/adept/persimmon-8b-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adept__persimmon-8b-base",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-11T16:30:00.730198](https://huggingface.co/datasets/open-llm-leaderboard/details_adept__persimmon-8b-base/blob/main/results_2023-10-11T16-30-00.730198.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4373382174928584,
"acc_stderr": 0.03537473296886481,
"acc_norm": 0.440779620602171,
"acc_norm_stderr": 0.03536781150443019,
"mc1": 0.22888616891064872,
"mc1_stderr": 0.014706994909055027,
"mc2": 0.378505315070287,
"mc2_stderr": 0.013586954257578736
},
"harness|arc:challenge|25": {
"acc": 0.41552901023890787,
"acc_stderr": 0.014401366641216384,
"acc_norm": 0.4274744027303754,
"acc_norm_stderr": 0.014456862944650652
},
"harness|hellaswag|10": {
"acc": 0.5203146783509262,
"acc_stderr": 0.004985661282998582,
"acc_norm": 0.7114120693089027,
"acc_norm_stderr": 0.004521798577922143
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.45925925925925926,
"acc_stderr": 0.04304979692464242,
"acc_norm": 0.45925925925925926,
"acc_norm_stderr": 0.04304979692464242
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4276315789473684,
"acc_stderr": 0.04026097083296559,
"acc_norm": 0.4276315789473684,
"acc_norm_stderr": 0.04026097083296559
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4377358490566038,
"acc_stderr": 0.030533338430467512,
"acc_norm": 0.4377358490566038,
"acc_norm_stderr": 0.030533338430467512
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5208333333333334,
"acc_stderr": 0.041775789507399935,
"acc_norm": 0.5208333333333334,
"acc_norm_stderr": 0.041775789507399935
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3988439306358382,
"acc_stderr": 0.03733626655383509,
"acc_norm": 0.3988439306358382,
"acc_norm_stderr": 0.03733626655383509
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.17647058823529413,
"acc_stderr": 0.03793281185307809,
"acc_norm": 0.17647058823529413,
"acc_norm_stderr": 0.03793281185307809
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3617021276595745,
"acc_stderr": 0.03141082197596241,
"acc_norm": 0.3617021276595745,
"acc_norm_stderr": 0.03141082197596241
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.34210526315789475,
"acc_stderr": 0.04462917535336936,
"acc_norm": 0.34210526315789475,
"acc_norm_stderr": 0.04462917535336936
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.47586206896551725,
"acc_stderr": 0.041618085035015295,
"acc_norm": 0.47586206896551725,
"acc_norm_stderr": 0.041618085035015295
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2804232804232804,
"acc_stderr": 0.023135287974325642,
"acc_norm": 0.2804232804232804,
"acc_norm_stderr": 0.023135287974325642
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377563,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377563
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4838709677419355,
"acc_stderr": 0.028429203176724555,
"acc_norm": 0.4838709677419355,
"acc_norm_stderr": 0.028429203176724555
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.33004926108374383,
"acc_stderr": 0.033085304262282574,
"acc_norm": 0.33004926108374383,
"acc_norm_stderr": 0.033085304262282574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5696969696969697,
"acc_stderr": 0.03866225962879077,
"acc_norm": 0.5696969696969697,
"acc_norm_stderr": 0.03866225962879077
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5050505050505051,
"acc_stderr": 0.035621707606254015,
"acc_norm": 0.5050505050505051,
"acc_norm_stderr": 0.035621707606254015
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5181347150259067,
"acc_stderr": 0.036060650018329185,
"acc_norm": 0.5181347150259067,
"acc_norm_stderr": 0.036060650018329185
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.39487179487179486,
"acc_stderr": 0.02478431694215638,
"acc_norm": 0.39487179487179486,
"acc_norm_stderr": 0.02478431694215638
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2851851851851852,
"acc_stderr": 0.027528599210340492,
"acc_norm": 0.2851851851851852,
"acc_norm_stderr": 0.027528599210340492
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.39915966386554624,
"acc_stderr": 0.031811100324139245,
"acc_norm": 0.39915966386554624,
"acc_norm_stderr": 0.031811100324139245
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2913907284768212,
"acc_stderr": 0.03710185726119994,
"acc_norm": 0.2913907284768212,
"acc_norm_stderr": 0.03710185726119994
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5321100917431193,
"acc_stderr": 0.021393071222680797,
"acc_norm": 0.5321100917431193,
"acc_norm_stderr": 0.021393071222680797
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2824074074074074,
"acc_stderr": 0.03070137211151094,
"acc_norm": 0.2824074074074074,
"acc_norm_stderr": 0.03070137211151094
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5882352941176471,
"acc_stderr": 0.034542365853806094,
"acc_norm": 0.5882352941176471,
"acc_norm_stderr": 0.034542365853806094
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.5569620253164557,
"acc_stderr": 0.032335327775334835,
"acc_norm": 0.5569620253164557,
"acc_norm_stderr": 0.032335327775334835
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.42152466367713004,
"acc_stderr": 0.03314190222110657,
"acc_norm": 0.42152466367713004,
"acc_norm_stderr": 0.03314190222110657
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5419847328244275,
"acc_stderr": 0.04369802690578756,
"acc_norm": 0.5419847328244275,
"acc_norm_stderr": 0.04369802690578756
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5289256198347108,
"acc_stderr": 0.04556710331269498,
"acc_norm": 0.5289256198347108,
"acc_norm_stderr": 0.04556710331269498
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.047803436269367894,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.047803436269367894
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.588957055214724,
"acc_stderr": 0.038656978537853624,
"acc_norm": 0.588957055214724,
"acc_norm_stderr": 0.038656978537853624
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.04521829902833585,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.04521829902833585
},
"harness|hendrycksTest-management|5": {
"acc": 0.49514563106796117,
"acc_stderr": 0.049505043821289195,
"acc_norm": 0.49514563106796117,
"acc_norm_stderr": 0.049505043821289195
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6367521367521367,
"acc_stderr": 0.03150712523091264,
"acc_norm": 0.6367521367521367,
"acc_norm_stderr": 0.03150712523091264
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5466155810983397,
"acc_stderr": 0.017802087135850304,
"acc_norm": 0.5466155810983397,
"acc_norm_stderr": 0.017802087135850304
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4653179190751445,
"acc_stderr": 0.0268542579282589,
"acc_norm": 0.4653179190751445,
"acc_norm_stderr": 0.0268542579282589
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24022346368715083,
"acc_stderr": 0.014288343803925296,
"acc_norm": 0.24022346368715083,
"acc_norm_stderr": 0.014288343803925296
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4869281045751634,
"acc_stderr": 0.028620130800700246,
"acc_norm": 0.4869281045751634,
"acc_norm_stderr": 0.028620130800700246
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4758842443729904,
"acc_stderr": 0.028365041542564577,
"acc_norm": 0.4758842443729904,
"acc_norm_stderr": 0.028365041542564577
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.4660493827160494,
"acc_stderr": 0.027756535257347666,
"acc_norm": 0.4660493827160494,
"acc_norm_stderr": 0.027756535257347666
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35815602836879434,
"acc_stderr": 0.028602085862759422,
"acc_norm": 0.35815602836879434,
"acc_norm_stderr": 0.028602085862759422
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3344198174706649,
"acc_stderr": 0.012049668983214933,
"acc_norm": 0.3344198174706649,
"acc_norm_stderr": 0.012049668983214933
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.39705882352941174,
"acc_stderr": 0.029722152099280058,
"acc_norm": 0.39705882352941174,
"acc_norm_stderr": 0.029722152099280058
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.38562091503267976,
"acc_stderr": 0.01969145905235415,
"acc_norm": 0.38562091503267976,
"acc_norm_stderr": 0.01969145905235415
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5181818181818182,
"acc_stderr": 0.04785964010794917,
"acc_norm": 0.5181818181818182,
"acc_norm_stderr": 0.04785964010794917
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.40408163265306124,
"acc_stderr": 0.03141470802586589,
"acc_norm": 0.40408163265306124,
"acc_norm_stderr": 0.03141470802586589
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.572139303482587,
"acc_stderr": 0.03498541988407795,
"acc_norm": 0.572139303482587,
"acc_norm_stderr": 0.03498541988407795
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4397590361445783,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.4397590361445783,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5964912280701754,
"acc_stderr": 0.03762738699917057,
"acc_norm": 0.5964912280701754,
"acc_norm_stderr": 0.03762738699917057
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22888616891064872,
"mc1_stderr": 0.014706994909055027,
"mc2": 0.378505315070287,
"mc2_stderr": 0.013586954257578736
}
}
```
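The per-task entries above can be aggregated in the usual way, by averaging `acc` over all `hendrycksTest` (MMLU) subjects while skipping entries that report other metrics. A minimal sketch, using only two of the subjects copied from the JSON above:

```python
# Sketch: averaging per-subject accuracy from a results dict shaped like
# the JSON above; only two subjects are copied in for illustration.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.27},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.45925925925925926},
    "harness|truthfulqa:mc|0": {"mc1": 0.22888616891064872},  # no "acc": skipped
}

mmlu_accs = [
    v["acc"] for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")
]
mean_mmlu_acc = sum(mmlu_accs) / len(mmlu_accs)
```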
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_adept__persimmon-8b-base
|
[
"region:us"
] |
2023-10-11T15:30:19+00:00
|
{"pretty_name": "Evaluation run of adept/persimmon-8b-base", "dataset_summary": "Dataset automatically created during the evaluation run of model [adept/persimmon-8b-base](https://huggingface.co/adept/persimmon-8b-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adept__persimmon-8b-base\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T16:30:00.730198](https://huggingface.co/datasets/open-llm-leaderboard/details_adept__persimmon-8b-base/blob/main/results_2023-10-11T16-30-00.730198.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4373382174928584,\n \"acc_stderr\": 0.03537473296886481,\n \"acc_norm\": 0.440779620602171,\n \"acc_norm_stderr\": 0.03536781150443019,\n \"mc1\": 0.22888616891064872,\n \"mc1_stderr\": 0.014706994909055027,\n \"mc2\": 0.378505315070287,\n \"mc2_stderr\": 0.013586954257578736\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.41552901023890787,\n \"acc_stderr\": 0.014401366641216384,\n \"acc_norm\": 0.4274744027303754,\n \"acc_norm_stderr\": 0.014456862944650652\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5203146783509262,\n \"acc_stderr\": 0.004985661282998582,\n \"acc_norm\": 0.7114120693089027,\n \"acc_norm_stderr\": 0.004521798577922143\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45925925925925926,\n \"acc_stderr\": 0.04304979692464242,\n \"acc_norm\": 0.45925925925925926,\n \"acc_norm_stderr\": 0.04304979692464242\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4276315789473684,\n \"acc_stderr\": 0.04026097083296559,\n \"acc_norm\": 0.4276315789473684,\n \"acc_norm_stderr\": 0.04026097083296559\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.4377358490566038,\n \"acc_stderr\": 0.030533338430467512,\n \"acc_norm\": 0.4377358490566038,\n \"acc_norm_stderr\": 0.030533338430467512\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5208333333333334,\n \"acc_stderr\": 0.041775789507399935,\n \"acc_norm\": 0.5208333333333334,\n \"acc_norm_stderr\": 0.041775789507399935\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.28,\n 
\"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3988439306358382,\n \"acc_stderr\": 0.03733626655383509,\n \"acc_norm\": 0.3988439306358382,\n \"acc_norm_stderr\": 0.03733626655383509\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.03793281185307809,\n \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.03793281185307809\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.3617021276595745,\n \"acc_stderr\": 0.03141082197596241,\n \"acc_norm\": 0.3617021276595745,\n \"acc_norm_stderr\": 0.03141082197596241\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.34210526315789475,\n \"acc_stderr\": 0.04462917535336936,\n \"acc_norm\": 0.34210526315789475,\n \"acc_norm_stderr\": 0.04462917535336936\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.47586206896551725,\n \"acc_stderr\": 0.041618085035015295,\n \"acc_norm\": 0.47586206896551725,\n \"acc_norm_stderr\": 0.041618085035015295\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2804232804232804,\n \"acc_stderr\": 0.023135287974325642,\n \"acc_norm\": 0.2804232804232804,\n \"acc_norm_stderr\": 0.023135287974325642\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 
0.04390259265377563,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377563\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4838709677419355,\n \"acc_stderr\": 0.028429203176724555,\n \"acc_norm\": 0.4838709677419355,\n \"acc_norm_stderr\": 0.028429203176724555\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.33004926108374383,\n \"acc_stderr\": 0.033085304262282574,\n \"acc_norm\": 0.33004926108374383,\n \"acc_norm_stderr\": 0.033085304262282574\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.5696969696969697,\n \"acc_stderr\": 0.03866225962879077,\n \"acc_norm\": 0.5696969696969697,\n \"acc_norm_stderr\": 0.03866225962879077\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5050505050505051,\n \"acc_stderr\": 0.035621707606254015,\n \"acc_norm\": 0.5050505050505051,\n \"acc_norm_stderr\": 0.035621707606254015\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.5181347150259067,\n \"acc_stderr\": 0.036060650018329185,\n \"acc_norm\": 0.5181347150259067,\n \"acc_norm_stderr\": 0.036060650018329185\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.39487179487179486,\n \"acc_stderr\": 0.02478431694215638,\n \"acc_norm\": 0.39487179487179486,\n \"acc_norm_stderr\": 0.02478431694215638\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2851851851851852,\n \"acc_stderr\": 0.027528599210340492,\n \"acc_norm\": 0.2851851851851852,\n \"acc_norm_stderr\": 0.027528599210340492\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": 
{\n \"acc\": 0.39915966386554624,\n \"acc_stderr\": 0.031811100324139245,\n \"acc_norm\": 0.39915966386554624,\n \"acc_norm_stderr\": 0.031811100324139245\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2913907284768212,\n \"acc_stderr\": 0.03710185726119994,\n \"acc_norm\": 0.2913907284768212,\n \"acc_norm_stderr\": 0.03710185726119994\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.5321100917431193,\n \"acc_stderr\": 0.021393071222680797,\n \"acc_norm\": 0.5321100917431193,\n \"acc_norm_stderr\": 0.021393071222680797\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.2824074074074074,\n \"acc_stderr\": 0.03070137211151094,\n \"acc_norm\": 0.2824074074074074,\n \"acc_norm_stderr\": 0.03070137211151094\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.034542365853806094,\n \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.034542365853806094\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.5569620253164557,\n \"acc_stderr\": 0.032335327775334835,\n \"acc_norm\": 0.5569620253164557,\n \"acc_norm_stderr\": 0.032335327775334835\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.42152466367713004,\n \"acc_stderr\": 0.03314190222110657,\n \"acc_norm\": 0.42152466367713004,\n \"acc_norm_stderr\": 0.03314190222110657\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5419847328244275,\n \"acc_stderr\": 0.04369802690578756,\n \"acc_norm\": 0.5419847328244275,\n \"acc_norm_stderr\": 0.04369802690578756\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.5289256198347108,\n \"acc_stderr\": 0.04556710331269498,\n \"acc_norm\": 0.5289256198347108,\n \"acc_norm_stderr\": 0.04556710331269498\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.42592592592592593,\n \"acc_stderr\": 0.047803436269367894,\n \"acc_norm\": 0.42592592592592593,\n 
\"acc_norm_stderr\": 0.047803436269367894\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.588957055214724,\n \"acc_stderr\": 0.038656978537853624,\n \"acc_norm\": 0.588957055214724,\n \"acc_norm_stderr\": 0.038656978537853624\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n \"acc_stderr\": 0.04521829902833585,\n \"acc_norm\": 0.3482142857142857,\n \"acc_norm_stderr\": 0.04521829902833585\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.49514563106796117,\n \"acc_stderr\": 0.049505043821289195,\n \"acc_norm\": 0.49514563106796117,\n \"acc_norm_stderr\": 0.049505043821289195\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6367521367521367,\n \"acc_stderr\": 0.03150712523091264,\n \"acc_norm\": 0.6367521367521367,\n \"acc_norm_stderr\": 0.03150712523091264\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5466155810983397,\n \"acc_stderr\": 0.017802087135850304,\n \"acc_norm\": 0.5466155810983397,\n \"acc_norm_stderr\": 0.017802087135850304\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.4653179190751445,\n \"acc_stderr\": 0.0268542579282589,\n \"acc_norm\": 0.4653179190751445,\n \"acc_norm_stderr\": 0.0268542579282589\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24022346368715083,\n \"acc_stderr\": 0.014288343803925296,\n \"acc_norm\": 0.24022346368715083,\n \"acc_norm_stderr\": 0.014288343803925296\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.4869281045751634,\n \"acc_stderr\": 0.028620130800700246,\n \"acc_norm\": 0.4869281045751634,\n \"acc_norm_stderr\": 0.028620130800700246\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4758842443729904,\n \"acc_stderr\": 0.028365041542564577,\n \"acc_norm\": 0.4758842443729904,\n 
\"acc_norm_stderr\": 0.028365041542564577\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.4660493827160494,\n \"acc_stderr\": 0.027756535257347666,\n \"acc_norm\": 0.4660493827160494,\n \"acc_norm_stderr\": 0.027756535257347666\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.35815602836879434,\n \"acc_stderr\": 0.028602085862759422,\n \"acc_norm\": 0.35815602836879434,\n \"acc_norm_stderr\": 0.028602085862759422\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3344198174706649,\n \"acc_stderr\": 0.012049668983214933,\n \"acc_norm\": 0.3344198174706649,\n \"acc_norm_stderr\": 0.012049668983214933\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.39705882352941174,\n \"acc_stderr\": 0.029722152099280058,\n \"acc_norm\": 0.39705882352941174,\n \"acc_norm_stderr\": 0.029722152099280058\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.38562091503267976,\n \"acc_stderr\": 0.01969145905235415,\n \"acc_norm\": 0.38562091503267976,\n \"acc_norm_stderr\": 0.01969145905235415\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5181818181818182,\n \"acc_stderr\": 0.04785964010794917,\n \"acc_norm\": 0.5181818181818182,\n \"acc_norm_stderr\": 0.04785964010794917\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.40408163265306124,\n \"acc_stderr\": 0.03141470802586589,\n \"acc_norm\": 0.40408163265306124,\n \"acc_norm_stderr\": 0.03141470802586589\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.572139303482587,\n \"acc_stderr\": 0.03498541988407795,\n \"acc_norm\": 0.572139303482587,\n \"acc_norm_stderr\": 0.03498541988407795\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4397590361445783,\n \"acc_stderr\": 0.03864139923699121,\n 
\"acc_norm\": 0.4397590361445783,\n \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.5964912280701754,\n \"acc_stderr\": 0.03762738699917057,\n \"acc_norm\": 0.5964912280701754,\n \"acc_norm_stderr\": 0.03762738699917057\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22888616891064872,\n \"mc1_stderr\": 0.014706994909055027,\n \"mc2\": 0.378505315070287,\n \"mc2_stderr\": 0.013586954257578736\n }\n}\n```", "repo_url": "https://huggingface.co/adept/persimmon-8b-base", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|arc:challenge|25_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hellaswag|10_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-30-00.730198.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-30-00.730198.parquet", 
"**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-30-00.730198.parquet", 
"**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-30-00.730198.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-30-00.730198.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-30-00.730198.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T16-30-00.730198.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", 
"data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": 
["**/details_harness|truthfulqa:mc|0_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T16-30-00.730198.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T16_30_00.730198", "path": ["results_2023-10-11T16-30-00.730198.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T16-30-00.730198.parquet"]}]}]}
|
2023-10-11T15:31:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of adept/persimmon-8b-base
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model adept/persimmon-8b-base on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-11T16:30:00.730198 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of adept/persimmon-8b-base",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adept/persimmon-8b-base on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-11T16:30:00.730198(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of adept/persimmon-8b-base",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adept/persimmon-8b-base on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-11T16:30:00.730198(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
18,
31,
166,
68,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of adept/persimmon-8b-base## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model adept/persimmon-8b-base on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T16:30:00.730198(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3d65c277715d3f7b368ff150281726f8d00af716
|
# I2P - Adversarial Samples
Here we provide a subset of the inappropriate image prompts (I2P) benchmark that are strong candidates for adversarial testing.
Specifically, all prompts in this dataset are reasonably likely to produce inappropriate images and to bypass the MidJourney prompt filter.
More details are provided in our AACL workshop paper: ["Distilling Adversarial Prompts from Safety Benchmarks:
Report for the Adversarial Nibbler Challenge"](https://arxiv.org/abs/2309.11575)
|
AIML-TUDA/i2p-adversarial-split
|
[
"license:mit",
"arxiv:2309.11575",
"region:us"
] |
2023-10-11T15:39:57+00:00
|
{"license": "mit"}
|
2023-10-11T15:45:31+00:00
|
[
"2309.11575"
] |
[] |
TAGS
#license-mit #arxiv-2309.11575 #region-us
|
# I2P - Adversarial Samples
Here we provide a subset of the inappropriate image prompts (I2P) benchmark that are strong candidates for adversarial testing.
Specifically, all prompts in this dataset are reasonably likely to produce inappropriate images and to bypass the MidJourney prompt filter.
More details are provided in our AACL workshop paper: "Distilling Adversarial Prompts from Safety Benchmarks:
Report for the Adversarial Nibbler Challenge"
|
[
"# I2P - Adversarial Samples \n\nWe here provide a subset of the inappropriate image prompts (I2P) benchmark that are solid candidates for adversarial testing. \nSpecifically, all prompts in this dataset provided here are reasonably likely to produce inappropriate images and bypass the MidJourney prompt filter. \n\nMore details are provided in our AACL workshop paper: \"Distilling Adversarial Prompts from Safety Benchmarks:\nReport for the Adversarial Nibbler Challenge\""
] |
[
"TAGS\n#license-mit #arxiv-2309.11575 #region-us \n",
"# I2P - Adversarial Samples \n\nWe here provide a subset of the inappropriate image prompts (I2P) benchmark that are solid candidates for adversarial testing. \nSpecifically, all prompts in this dataset provided here are reasonably likely to produce inappropriate images and bypass the MidJourney prompt filter. \n\nMore details are provided in our AACL workshop paper: \"Distilling Adversarial Prompts from Safety Benchmarks:\nReport for the Adversarial Nibbler Challenge\""
] |
[
19,
110
] |
[
"passage: TAGS\n#license-mit #arxiv-2309.11575 #region-us \n# I2P - Adversarial Samples \n\nWe here provide a subset of the inappropriate image prompts (I2P) benchmark that are solid candidates for adversarial testing. \nSpecifically, all prompts in this dataset provided here are reasonably likely to produce inappropriate images and bypass the MidJourney prompt filter. \n\nMore details are provided in our AACL workshop paper: \"Distilling Adversarial Prompts from Safety Benchmarks:\nReport for the Adversarial Nibbler Challenge\""
] |
53f35250df63859ad56754867f63b13ea0ee32f1
|
# Dataset Card for Evaluation run of adept/persimmon-8b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/adept/persimmon-8b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [adept/persimmon-8b-chat](https://huggingface.co/adept/persimmon-8b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adept__persimmon-8b-chat",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-11T16:42:26.599502](https://huggingface.co/datasets/open-llm-leaderboard/details_adept__persimmon-8b-chat/blob/main/results_2023-10-11T16-42-26.599502.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.451177873431513,
"acc_stderr": 0.035194823330966046,
"acc_norm": 0.45457780958443955,
"acc_norm_stderr": 0.03518601630486001,
"mc1": 0.22276621787025705,
"mc1_stderr": 0.014566506961396736,
"mc2": 0.35928086491565836,
"mc2_stderr": 0.013539847732342817
},
"harness|arc:challenge|25": {
"acc": 0.43856655290102387,
"acc_stderr": 0.014500682618212864,
"acc_norm": 0.4496587030716723,
"acc_norm_stderr": 0.014537144444284738
},
"harness|hellaswag|10": {
"acc": 0.5435172276438957,
"acc_stderr": 0.0049708466975523094,
"acc_norm": 0.7330213104959171,
"acc_norm_stderr": 0.004414770331224652
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4888888888888889,
"acc_stderr": 0.04318275491977976,
"acc_norm": 0.4888888888888889,
"acc_norm_stderr": 0.04318275491977976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4407894736842105,
"acc_stderr": 0.04040311062490436,
"acc_norm": 0.4407894736842105,
"acc_norm_stderr": 0.04040311062490436
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5132075471698113,
"acc_stderr": 0.030762134874500482,
"acc_norm": 0.5132075471698113,
"acc_norm_stderr": 0.030762134874500482
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5347222222222222,
"acc_stderr": 0.04171115858181618,
"acc_norm": 0.5347222222222222,
"acc_norm_stderr": 0.04171115858181618
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720683,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720683
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.42196531791907516,
"acc_stderr": 0.037657466938651504,
"acc_norm": 0.42196531791907516,
"acc_norm_stderr": 0.037657466938651504
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.19607843137254902,
"acc_stderr": 0.03950581861179961,
"acc_norm": 0.19607843137254902,
"acc_norm_stderr": 0.03950581861179961
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4085106382978723,
"acc_stderr": 0.03213418026701576,
"acc_norm": 0.4085106382978723,
"acc_norm_stderr": 0.03213418026701576
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.041424397194893624,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.041424397194893624
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.42758620689655175,
"acc_stderr": 0.041227371113703316,
"acc_norm": 0.42758620689655175,
"acc_norm_stderr": 0.041227371113703316
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.28835978835978837,
"acc_stderr": 0.0233306540545359,
"acc_norm": 0.28835978835978837,
"acc_norm_stderr": 0.0233306540545359
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.040061680838488774,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.040061680838488774
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5225806451612903,
"acc_stderr": 0.028414985019707868,
"acc_norm": 0.5225806451612903,
"acc_norm_stderr": 0.028414985019707868
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.33497536945812806,
"acc_stderr": 0.033208527423483104,
"acc_norm": 0.33497536945812806,
"acc_norm_stderr": 0.033208527423483104
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.593939393939394,
"acc_stderr": 0.03834816355401181,
"acc_norm": 0.593939393939394,
"acc_norm_stderr": 0.03834816355401181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5202020202020202,
"acc_stderr": 0.03559443565563917,
"acc_norm": 0.5202020202020202,
"acc_norm_stderr": 0.03559443565563917
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5854922279792746,
"acc_stderr": 0.035553003195576686,
"acc_norm": 0.5854922279792746,
"acc_norm_stderr": 0.035553003195576686
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4358974358974359,
"acc_stderr": 0.02514180151117749,
"acc_norm": 0.4358974358974359,
"acc_norm_stderr": 0.02514180151117749
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.02684205787383371,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.02684205787383371
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.41596638655462187,
"acc_stderr": 0.03201650100739615,
"acc_norm": 0.41596638655462187,
"acc_norm_stderr": 0.03201650100739615
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.24503311258278146,
"acc_stderr": 0.03511807571804726,
"acc_norm": 0.24503311258278146,
"acc_norm_stderr": 0.03511807571804726
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5596330275229358,
"acc_stderr": 0.02128431062376155,
"acc_norm": 0.5596330275229358,
"acc_norm_stderr": 0.02128431062376155
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.030546745264953174,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.030546745264953174
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.553921568627451,
"acc_stderr": 0.034888454513049734,
"acc_norm": 0.553921568627451,
"acc_norm_stderr": 0.034888454513049734
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6286919831223629,
"acc_stderr": 0.031450686007448596,
"acc_norm": 0.6286919831223629,
"acc_norm_stderr": 0.031450686007448596
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5515695067264574,
"acc_stderr": 0.033378837362550984,
"acc_norm": 0.5515695067264574,
"acc_norm_stderr": 0.033378837362550984
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5267175572519084,
"acc_stderr": 0.04379024936553894,
"acc_norm": 0.5267175572519084,
"acc_norm_stderr": 0.04379024936553894
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5867768595041323,
"acc_stderr": 0.04495087843548408,
"acc_norm": 0.5867768595041323,
"acc_norm_stderr": 0.04495087843548408
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.4537037037037037,
"acc_stderr": 0.048129173245368216,
"acc_norm": 0.4537037037037037,
"acc_norm_stderr": 0.048129173245368216
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.558282208588957,
"acc_stderr": 0.03901591825836184,
"acc_norm": 0.558282208588957,
"acc_norm_stderr": 0.03901591825836184
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.38392857142857145,
"acc_stderr": 0.04616143075028547,
"acc_norm": 0.38392857142857145,
"acc_norm_stderr": 0.04616143075028547
},
"harness|hendrycksTest-management|5": {
"acc": 0.6019417475728155,
"acc_stderr": 0.048467482539772386,
"acc_norm": 0.6019417475728155,
"acc_norm_stderr": 0.048467482539772386
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6752136752136753,
"acc_stderr": 0.03067902276549883,
"acc_norm": 0.6752136752136753,
"acc_norm_stderr": 0.03067902276549883
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5747126436781609,
"acc_stderr": 0.01767922548943145,
"acc_norm": 0.5747126436781609,
"acc_norm_stderr": 0.01767922548943145
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4797687861271676,
"acc_stderr": 0.026897049996382868,
"acc_norm": 0.4797687861271676,
"acc_norm_stderr": 0.026897049996382868
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24022346368715083,
"acc_stderr": 0.014288343803925293,
"acc_norm": 0.24022346368715083,
"acc_norm_stderr": 0.014288343803925293
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4477124183006536,
"acc_stderr": 0.028472938478033522,
"acc_norm": 0.4477124183006536,
"acc_norm_stderr": 0.028472938478033522
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4790996784565916,
"acc_stderr": 0.028373270961069414,
"acc_norm": 0.4790996784565916,
"acc_norm_stderr": 0.028373270961069414
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.45987654320987653,
"acc_stderr": 0.027731022753539277,
"acc_norm": 0.45987654320987653,
"acc_norm_stderr": 0.027731022753539277
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.32978723404255317,
"acc_stderr": 0.028045946942042405,
"acc_norm": 0.32978723404255317,
"acc_norm_stderr": 0.028045946942042405
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3546284224250326,
"acc_stderr": 0.01221857643909016,
"acc_norm": 0.3546284224250326,
"acc_norm_stderr": 0.01221857643909016
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.33455882352941174,
"acc_stderr": 0.028661996202335307,
"acc_norm": 0.33455882352941174,
"acc_norm_stderr": 0.028661996202335307
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.434640522875817,
"acc_stderr": 0.020054269200726452,
"acc_norm": 0.434640522875817,
"acc_norm_stderr": 0.020054269200726452
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5363636363636364,
"acc_stderr": 0.04776449162396197,
"acc_norm": 0.5363636363636364,
"acc_norm_stderr": 0.04776449162396197
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.39591836734693875,
"acc_stderr": 0.03130802899065685,
"acc_norm": 0.39591836734693875,
"acc_norm_stderr": 0.03130802899065685
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6218905472636815,
"acc_stderr": 0.03428867848778658,
"acc_norm": 0.6218905472636815,
"acc_norm_stderr": 0.03428867848778658
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-virology|5": {
"acc": 0.40963855421686746,
"acc_stderr": 0.03828401115079022,
"acc_norm": 0.40963855421686746,
"acc_norm_stderr": 0.03828401115079022
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6023391812865497,
"acc_stderr": 0.03753638955761691,
"acc_norm": 0.6023391812865497,
"acc_norm_stderr": 0.03753638955761691
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22276621787025705,
"mc1_stderr": 0.014566506961396736,
"mc2": 0.35928086491565836,
"mc2_stderr": 0.013539847732342817
}
}
```
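As a quick sanity check, the aggregated JSON above can be parsed directly with the standard library; a minimal sketch using a hand-copied excerpt of the payload (not the full file):

```python
import json

# Excerpt of the aggregated results shown above (hand-copied, not fetched)
payload = """
{
  "all": {
    "acc": 0.451177873431513,
    "acc_norm": 0.45457780958443955,
    "mc2": 0.35928086491565836
  },
  "harness|arc:challenge|25": {
    "acc": 0.43856655290102387,
    "acc_norm": 0.4496587030716723
  }
}
"""

results = json.loads(payload)
overall_acc = results["all"]["acc"]
arc_acc_norm = results["harness|arc:challenge|25"]["acc_norm"]
print(f"overall acc: {overall_acc:.3f}, ARC acc_norm: {arc_acc_norm:.3f}")
# -> overall acc: 0.451, ARC acc_norm: 0.450
```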
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_adept__persimmon-8b-chat
|
[
"region:us"
] |
2023-10-11T15:42:44+00:00
|
{"pretty_name": "Evaluation run of adept/persimmon-8b-chat", "dataset_summary": "Dataset automatically created during the evaluation run of model [adept/persimmon-8b-chat](https://huggingface.co/adept/persimmon-8b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adept__persimmon-8b-chat\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T16:42:26.599502](https://huggingface.co/datasets/open-llm-leaderboard/details_adept__persimmon-8b-chat/blob/main/results_2023-10-11T16-42-26.599502.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.451177873431513,\n \"acc_stderr\": 0.035194823330966046,\n \"acc_norm\": 0.45457780958443955,\n \"acc_norm_stderr\": 0.03518601630486001,\n \"mc1\": 0.22276621787025705,\n \"mc1_stderr\": 0.014566506961396736,\n \"mc2\": 0.35928086491565836,\n \"mc2_stderr\": 0.013539847732342817\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.43856655290102387,\n \"acc_stderr\": 0.014500682618212864,\n \"acc_norm\": 0.4496587030716723,\n \"acc_norm_stderr\": 0.014537144444284738\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5435172276438957,\n \"acc_stderr\": 0.0049708466975523094,\n \"acc_norm\": 0.7330213104959171,\n \"acc_norm_stderr\": 0.004414770331224652\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4888888888888889,\n \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.4888888888888889,\n \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4407894736842105,\n \"acc_stderr\": 0.04040311062490436,\n \"acc_norm\": 0.4407894736842105,\n \"acc_norm_stderr\": 0.04040311062490436\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5132075471698113,\n \"acc_stderr\": 0.030762134874500482,\n \"acc_norm\": 0.5132075471698113,\n \"acc_norm_stderr\": 0.030762134874500482\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5347222222222222,\n \"acc_stderr\": 0.04171115858181618,\n \"acc_norm\": 0.5347222222222222,\n \"acc_norm_stderr\": 0.04171115858181618\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.29,\n 
\"acc_stderr\": 0.04560480215720683,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.42196531791907516,\n \"acc_stderr\": 0.037657466938651504,\n \"acc_norm\": 0.42196531791907516,\n \"acc_norm_stderr\": 0.037657466938651504\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.19607843137254902,\n \"acc_stderr\": 0.03950581861179961,\n \"acc_norm\": 0.19607843137254902,\n \"acc_norm_stderr\": 0.03950581861179961\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4085106382978723,\n \"acc_stderr\": 0.03213418026701576,\n \"acc_norm\": 0.4085106382978723,\n \"acc_norm_stderr\": 0.03213418026701576\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.041424397194893624,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.041424397194893624\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.42758620689655175,\n \"acc_stderr\": 0.041227371113703316,\n \"acc_norm\": 0.42758620689655175,\n \"acc_norm_stderr\": 0.041227371113703316\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.28835978835978837,\n \"acc_stderr\": 0.0233306540545359,\n \"acc_norm\": 0.28835978835978837,\n \"acc_norm_stderr\": 0.0233306540545359\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 
0.040061680838488774,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.040061680838488774\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5225806451612903,\n \"acc_stderr\": 0.028414985019707868,\n \"acc_norm\": 0.5225806451612903,\n \"acc_norm_stderr\": 0.028414985019707868\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.33497536945812806,\n \"acc_stderr\": 0.033208527423483104,\n \"acc_norm\": 0.33497536945812806,\n \"acc_norm_stderr\": 0.033208527423483104\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.593939393939394,\n \"acc_stderr\": 0.03834816355401181,\n \"acc_norm\": 0.593939393939394,\n \"acc_norm_stderr\": 0.03834816355401181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5202020202020202,\n \"acc_stderr\": 0.03559443565563917,\n \"acc_norm\": 0.5202020202020202,\n \"acc_norm_stderr\": 0.03559443565563917\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.5854922279792746,\n \"acc_stderr\": 0.035553003195576686,\n \"acc_norm\": 0.5854922279792746,\n \"acc_norm_stderr\": 0.035553003195576686\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4358974358974359,\n \"acc_stderr\": 0.02514180151117749,\n \"acc_norm\": 0.4358974358974359,\n \"acc_norm_stderr\": 0.02514180151117749\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.41596638655462187,\n \"acc_stderr\": 0.03201650100739615,\n \"acc_norm\": 0.41596638655462187,\n \"acc_norm_stderr\": 0.03201650100739615\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.24503311258278146,\n \"acc_stderr\": 0.03511807571804726,\n \"acc_norm\": 0.24503311258278146,\n \"acc_norm_stderr\": 0.03511807571804726\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.5596330275229358,\n \"acc_stderr\": 0.02128431062376155,\n \"acc_norm\": 0.5596330275229358,\n \"acc_norm_stderr\": 0.02128431062376155\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.030546745264953174,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.030546745264953174\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.553921568627451,\n \"acc_stderr\": 0.034888454513049734,\n \"acc_norm\": 0.553921568627451,\n \"acc_norm_stderr\": 0.034888454513049734\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6286919831223629,\n \"acc_stderr\": 0.031450686007448596,\n \"acc_norm\": 0.6286919831223629,\n \"acc_norm_stderr\": 0.031450686007448596\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5515695067264574,\n \"acc_stderr\": 0.033378837362550984,\n \"acc_norm\": 0.5515695067264574,\n \"acc_norm_stderr\": 0.033378837362550984\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5267175572519084,\n \"acc_stderr\": 0.04379024936553894,\n \"acc_norm\": 0.5267175572519084,\n \"acc_norm_stderr\": 0.04379024936553894\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.5867768595041323,\n \"acc_stderr\": 0.04495087843548408,\n \"acc_norm\": 0.5867768595041323,\n \"acc_norm_stderr\": 0.04495087843548408\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.4537037037037037,\n \"acc_stderr\": 0.048129173245368216,\n 
\"acc_norm\": 0.4537037037037037,\n \"acc_norm_stderr\": 0.048129173245368216\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.558282208588957,\n \"acc_stderr\": 0.03901591825836184,\n \"acc_norm\": 0.558282208588957,\n \"acc_norm_stderr\": 0.03901591825836184\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6019417475728155,\n \"acc_stderr\": 0.048467482539772386,\n \"acc_norm\": 0.6019417475728155,\n \"acc_norm_stderr\": 0.048467482539772386\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6752136752136753,\n \"acc_stderr\": 0.03067902276549883,\n \"acc_norm\": 0.6752136752136753,\n \"acc_norm_stderr\": 0.03067902276549883\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5747126436781609,\n \"acc_stderr\": 0.01767922548943145,\n \"acc_norm\": 0.5747126436781609,\n \"acc_norm_stderr\": 0.01767922548943145\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.4797687861271676,\n \"acc_stderr\": 0.026897049996382868,\n \"acc_norm\": 0.4797687861271676,\n \"acc_norm_stderr\": 0.026897049996382868\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24022346368715083,\n \"acc_stderr\": 0.014288343803925293,\n \"acc_norm\": 0.24022346368715083,\n \"acc_norm_stderr\": 0.014288343803925293\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.4477124183006536,\n \"acc_stderr\": 0.028472938478033522,\n \"acc_norm\": 0.4477124183006536,\n \"acc_norm_stderr\": 0.028472938478033522\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4790996784565916,\n \"acc_stderr\": 0.028373270961069414,\n 
\"acc_norm\": 0.4790996784565916,\n \"acc_norm_stderr\": 0.028373270961069414\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.45987654320987653,\n \"acc_stderr\": 0.027731022753539277,\n \"acc_norm\": 0.45987654320987653,\n \"acc_norm_stderr\": 0.027731022753539277\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.32978723404255317,\n \"acc_stderr\": 0.028045946942042405,\n \"acc_norm\": 0.32978723404255317,\n \"acc_norm_stderr\": 0.028045946942042405\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3546284224250326,\n \"acc_stderr\": 0.01221857643909016,\n \"acc_norm\": 0.3546284224250326,\n \"acc_norm_stderr\": 0.01221857643909016\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.33455882352941174,\n \"acc_stderr\": 0.028661996202335307,\n \"acc_norm\": 0.33455882352941174,\n \"acc_norm_stderr\": 0.028661996202335307\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.434640522875817,\n \"acc_stderr\": 0.020054269200726452,\n \"acc_norm\": 0.434640522875817,\n \"acc_norm_stderr\": 0.020054269200726452\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5363636363636364,\n \"acc_stderr\": 0.04776449162396197,\n \"acc_norm\": 0.5363636363636364,\n \"acc_norm_stderr\": 0.04776449162396197\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.39591836734693875,\n \"acc_stderr\": 0.03130802899065685,\n \"acc_norm\": 0.39591836734693875,\n \"acc_norm_stderr\": 0.03130802899065685\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6218905472636815,\n \"acc_stderr\": 0.03428867848778658,\n \"acc_norm\": 0.6218905472636815,\n \"acc_norm_stderr\": 0.03428867848778658\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.40963855421686746,\n 
\"acc_stderr\": 0.03828401115079022,\n \"acc_norm\": 0.40963855421686746,\n \"acc_norm_stderr\": 0.03828401115079022\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.6023391812865497,\n \"acc_stderr\": 0.03753638955761691,\n \"acc_norm\": 0.6023391812865497,\n \"acc_norm_stderr\": 0.03753638955761691\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22276621787025705,\n \"mc1_stderr\": 0.014566506961396736,\n \"mc2\": 0.35928086491565836,\n \"mc2_stderr\": 0.013539847732342817\n }\n}\n```", "repo_url": "https://huggingface.co/adept/persimmon-8b-chat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|arc:challenge|25_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hellaswag|10_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-42-26.599502.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-42-26.599502.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-42-26.599502.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-42-26.599502.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-42-26.599502.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-42-26.599502.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T16-42-26.599502.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", 
"data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-marketing|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": 
["**/details_harness|truthfulqa:mc|0_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T16-42-26.599502.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T16_42_26.599502", "path": ["results_2023-10-11T16-42-26.599502.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T16-42-26.599502.parquet"]}]}]}
|
2023-10-11T15:43:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of adept/persimmon-8b-chat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model adept/persimmon-8b-chat on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
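A minimal sketch, following the pattern used elsewhere in these cards (the repository name below is assumed from the leaderboard's usual `details_<org>__<model>_public` naming convention, and the config name is one of the 61 listed in this card's metadata):

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_adept__persimmon-8b-chat_public",
    "harness_truthfulqa_mc_0",  # any of the 61 task configs
    split="train",              # "train" always points to the latest results
)
```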
## Latest results
These are the latest results from run 2023-10-11T16:42:26.599502 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of adept/persimmon-8b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adept/persimmon-8b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-11T16:42:26.599502(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of adept/persimmon-8b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model adept/persimmon-8b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-11T16:42:26.599502(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
18,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of adept/persimmon-8b-chat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model adept/persimmon-8b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T16:42:26.599502(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7b3ddb9c3a01f772193aaa0cf578488798627b6b
|
**Dataset Name:** Long-Form Article Summarization Dataset
**Description:**
The Long-Form Article Summarization Dataset is meticulously curated for the purpose of fine-tuning Natural Language Processing (NLP) models specifically tailored for summarization tasks. It is a rich collection of long-form articles that have been carefully condensed and summarized. The dataset provides a diverse range of topics and writing styles, making it an invaluable resource for researchers and practitioners working on summarization algorithms and applications.
**Data Sources:**
1. **Billsum:** This dataset includes summaries of U.S. congressional and state bills, providing insights into legislative documents.
2. **Scientific Papers:** A collection of scientific papers covering various disciplines, enabling a deep dive into research-oriented content.
3. **Multi_news:** This dataset incorporates news articles, offering a blend of current events and journalistic writing styles.
4. **CCDV/Pubmed-Summarization:** Focused on biomedical literature, this dataset contains summaries from Pubmed articles, offering specialized content related to the field of medicine and life sciences.
**Data Combination:**
The Long-Form Article Summarization Dataset is an amalgamation of the above-mentioned datasets. By combining these diverse sources, the dataset achieves a comprehensive coverage of topics, styles, and domains. This fusion enhances the dataset's versatility and applicability across a wide array of domains, making it a valuable asset for NLP research and development.
**Data Preprocessing:**
To ensure equal representation of unique domains and to manage the scale of the dataset, large datasets were down-sampled. This meticulous preprocessing step guarantees that each domain is adequately represented, promoting a balanced and unbiased training environment for NLP models.
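The down-sampling step described above can be sketched as a per-domain cap; this is an illustrative reconstruction, not the dataset's actual preprocessing script, and the function and field names are hypothetical:

```python
import random
from collections import defaultdict

def downsample_balanced(examples, per_domain, seed=0):
    """Down-sample so every domain contributes at most `per_domain` examples.

    `examples` is a list of (domain, record) pairs; capping each domain keeps
    large sources (e.g. scientific papers) from dominating smaller ones.
    """
    by_domain = defaultdict(list)
    for domain, record in examples:
        by_domain[domain].append(record)

    rng = random.Random(seed)  # fixed seed for reproducible sampling
    out = []
    for domain, records in sorted(by_domain.items()):
        if len(records) > per_domain:
            records = rng.sample(records, per_domain)
        out.extend((domain, r) for r in records)
    return out
```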
**Intended Use:**
This dataset is specifically designed for fine-tuning NLP models focused on summarization tasks. Researchers and developers can utilize this dataset to train and evaluate their algorithms for generating concise and informative summaries from long-form articles. The dataset's diverse origins and careful preprocessing make it an ideal choice for enhancing the summarization capabilities of NLP models.
**Access:**
The Long-Form Article Summarization Dataset is available for research purposes and can be accessed through authorized channels. Researchers and developers interested in using this dataset are encouraged to adhere to ethical guidelines and data usage policies governing the respective sources.
**Citation:**
Researchers and practitioners are expected to cite the original sources of the datasets used in this amalgamation, namely "Billsum," "Scientific Papers," "Multi_news," and "CCDV/Pubmed-Summarization," in addition to acknowledging the creation of the Long-Form Article Summarization Dataset in their publications and research outputs.
This dataset card provides an overview of the Long-Form Article Summarization Dataset, outlining its sources, preprocessing methods, intended use, and access guidelines, ensuring transparent and responsible utilization of the valuable data it encapsulates.
|
vgoldberg/longform_article_summarization
|
[
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-10-11T16:01:42+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["summarization"], "pretty_name": "Long-Form Article Summarization Dataset", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2243293725, "num_examples": 105256}], "download_size": 880664627, "dataset_size": 2243293725}}
|
2023-10-11T18:36:28+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-summarization #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
|
Dataset Name: Long-Form Article Summarization Dataset
Description:
The Long-Form Article Summarization Dataset is meticulously curated for the purpose of fine-tuning Natural Language Processing (NLP) models specifically tailored for summarization tasks. It is a rich collection of long-form articles that have been carefully condensed and summarized. The dataset provides a diverse range of topics and writing styles, making it an invaluable resource for researchers and practitioners working on summarization algorithms and applications.
Data Sources:
1. Billsum: This dataset includes summaries of U.S. congressional and state bills, providing insights into legislative documents.
2. Scientific Papers: A collection of scientific papers covering various disciplines, enabling a deep dive into research-oriented content.
3. Multi_news: This dataset incorporates news articles, offering a blend of current events and journalistic writing styles.
4. CCDV/Pubmed-Summarization: Focused on biomedical literature, this dataset contains summaries from Pubmed articles, offering specialized content related to the field of medicine and life sciences.
Data Combination:
The Long-Form Article Summarization Dataset is an amalgamation of the above-mentioned datasets. By combining these diverse sources, the dataset achieves a comprehensive coverage of topics, styles, and domains. This fusion enhances the dataset's versatility and applicability across a wide array of domains, making it a valuable asset for NLP research and development.
Data Preprocessing:
To ensure equal representation of unique domains and to manage the scale of the dataset, large datasets were down-sampled. This meticulous preprocessing step guarantees that each domain is adequately represented, promoting a balanced and unbiased training environment for NLP models.
Intended Use:
This dataset is specifically designed for fine-tuning NLP models focused on summarization tasks. Researchers and developers can utilize this dataset to train and evaluate their algorithms for generating concise and informative summaries from long-form articles. The dataset's diverse origins and careful preprocessing make it an ideal choice for enhancing the summarization capabilities of NLP models.
Access:
The Long-Form Article Summarization Dataset is available for research purposes and can be accessed through authorized channels. Researchers and developers interested in using this dataset are encouraged to adhere to ethical guidelines and data usage policies governing the respective sources.
Citation:
Researchers and practitioners are expected to cite the original sources of the datasets used in this amalgamation, namely "Billsum," "Scientific Papers," "Multi_news," and "CCDV/Pubmed-Summarization," in addition to acknowledging the creation of the Long-Form Article Summarization Dataset in their publications and research outputs.
This dataset card provides an overview of the Long-Form Article Summarization Dataset, outlining its sources, preprocessing methods, intended use, and access guidelines, ensuring transparent and responsible utilization of the valuable data it encapsulates.
|
[] |
[
"TAGS\n#task_categories-summarization #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n"
] |
[
40
] |
[
"passage: TAGS\n#task_categories-summarization #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n"
] |
d5f8204413aaa198910452f495f54dd8427fa5f1
|
# Dataset Card for "qrecc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
namespace-Pt/qrecc-corpus
|
[
"region:us"
] |
2023-10-11T16:20:54+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 84244312900, "num_examples": 54573064}], "download_size": 21571487893, "dataset_size": 84244312900}}
|
2023-10-12T02:17:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "qrecc"
More Information needed
|
[
"# Dataset Card for \"qrecc\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"qrecc\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"qrecc\"\n\nMore Information needed"
] |
dcb1ebd8ff6db699341632ba11e1c2c89025af70
|
Task: MCQ with single correct answer.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the [DataFinder](https://aclanthology.org/2023.acl-long.573/) dataset. We curate the abstracts of each dataset from [PapersWithCode](https://paperswithcode.com/datasets).
Each instance provides a short `query` discussing a research question, and keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with a single correct answer. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
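The conversion from a contrastive instance (one positive and several negative candidates) to a single-answer MCQ can be sketched as follows; the field names are illustrative only, not the dataset's actual schema:

```python
import random

def to_mcq(query, positive, negatives, num_options=4, seed=0):
    """Turn one contrastive instance into an MCQ with a single correct answer.

    Keeps the positive candidate, samples distractors from the negatives,
    and shuffles so the answer position is not fixed.
    """
    rng = random.Random(seed)
    options = [positive] + rng.sample(negatives, num_options - 1)
    rng.shuffle(options)
    return {
        "question": query,
        "options": options,
        "answer": options.index(positive),  # index of the correct option
    }
```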
To reproduce the construction of this dataset, please visit [https://github.com/shruti-singh/scidata_recommendation](https://github.com/shruti-singh/scidata_recommendation).
Please note that the query instances in this dataset have no intersection with the [`dataset_recommendation_mcq_mc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_mc) dataset.
|
shrutisingh/dataset_recommendation_mcq_sc
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-11T16:25:45+00:00
|
{"license": "apache-2.0"}
|
2023-10-12T16:14:33+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
Task: MCQ with single correct answer.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the DataFinder dataset. We curate the abstracts of each dataset from PapersWithCode.
Each instance provides a short 'query' discussing a research question, and keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with a single correct answer. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
To reproduce the construction of this dataset, please visit URL
Please note that the query instances in this dataset have no intersection with the 'dataset_recommendation_mcq_mc' dataset.
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
93f0ec4fbc6ac1198668871b3e7a411c57c1c260
|
# EC40 MNMT Dataset
GitHub: https://github.com/Smu-Tan/ZS-NMT-Variations/tree/main
### EC40 is an English-Centric Multilingual Machine Translation Dataset. It has over 60 Million sentences including 40 Languages across 5 Language Families.
#### Note: The dataset is cleaned and pre-processed using tools like Moses, for more details, please refer to the paper.
### Features:
1. We carefully balanced the dataset across resources and languages by strictly maintaining that each resource group contains 5 language families and each family consists of 8 representative languages.
2. EC40 covers a wide spectrum of resource availability, ranging from High(5M) to Medium(1M), Low(100K), and extremely-Low(50K) resources.
3. In total, there are 80 English-centric directions for training and 1,640 directions (including all supervised and ZS directions) for evaluation.
4. We make use of Ntrex-128 and Flores-200 as our validation and test set.
-----
## Languages and Family
| Family | Languges |
| :--- | :---: |
| Germanic | German, Dutch, Swedish, Danish, Afrikaans, Luxembourgish, Norwegian, Icelandic |
| Romance | French, Spanish, Italian, Portuguese, Romanian, Occitan, Asturian, Catalan |
| Slavic | Russian, Czech, Polish, Bulgarian, Ukrainian, Serbian, Belarusian, Bosnian |
| Indo-Aryan | Hindi, Bengali, Kannada, Marathi, Sindhi, Gujarati, Nepali, Urdu |
-----
## Dataset Stats
| Resource | Languages | Size |
| --- | --- | --- |
| High | de, nl, fr, es, ru, cs, hi, bn, ar, he | 5M |
| Medium | sv, da, it, pt, pl, bg, kn, mr, mt, ha | 1M |
| Low | af, lb, ro, oc, uk, sr, sd, gu, ti, am | 100k |
| Extremely-Low | no, is, ast, ca, be, bs, ne, ur, kab, so | 50k |
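The headline figures quoted above follow directly from the table; a quick sanity check (variable names are our own, not part of the dataset):

```python
# Per-direction corpus sizes implied by the resource table above.
sizes = {
    "high": 5_000_000,
    "medium": 1_000_000,
    "low": 100_000,
    "extremely_low": 50_000,
}
langs_per_tier = 10  # each resource tier lists 10 non-English languages

# Total English-centric sentence pairs: matches "over 60 million sentences".
total_pairs = sum(n * langs_per_tier for n in sizes.values())

# 40 non-English languages, each with en->xx and xx->en training directions.
train_directions = 40 * 2

# All 41 languages paired in both directions, excluding self-pairs,
# gives the 1,640 evaluation directions (supervised + zero-shot).
eval_directions = 41 * 40
```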
-----
## Build Fairseq dataset (sharded, to avoid RAM OOM)
```
Read toolkit/build_fairseq_sharded_dataset.sh
```
<br>
-----
## Train mTransformer-Large baseline
```
Read toolkit/train-EC40-mTrans-large.sh
```
|
ShaomuTan/EC40
|
[
"region:us"
] |
2023-10-11T16:27:27+00:00
|
{}
|
2023-10-11T18:43:00+00:00
|
[] |
[] |
TAGS
#region-us
|
EC40 MNMT Dataset
=================
GitHub: URL
### EC40 is an English-Centric Multilingual Machine Translation Dataset. It has over 60 Million sentences including 40 Languages across 5 Language Families.
#### Note: The dataset is cleaned and pre-processed using tools like Moses, for more details, please refer to the paper.
### Features:
1. We carefully balanced the dataset across resources and languages by strictly maintaining that each resource group contains 5 language families and each family consists of 8 representative languages.
2. EC40 covers a wide spectrum of resource availability, ranging from High(5M) to Medium(1M), Low(100K), and extremely-Low(50K) resources.
3. In total, there are 80 English-centric directions for training and 1,640 directions (including all supervised and ZS directions) for evaluation.
4. We make use of Ntrex-128 and Flores-200 as our validation and test set.
---
Languages and Family
--------------------
---
Dataset Stats
-------------
Resource: High, Languages: de, nl, fr, es, ru, cs, hi, bn, ar, he, Size: 5M
Resource: Medium, Languages: sv, da, it, pt, pl, bg, kn, mr, mt, ha, Size: 1M
Resource: Low, Languages: af, lb, ro, oc, uk, sr, sd, gu, ti, am, Size: 100k
Resource: Extremely-Low, Languages: no, is, ast, ca, be, bs, ne, ur, kab, so, Size: 50k
---
Build Fairseq dataset (sharded, to avoid RAM OOM)
-----------------------------------------------
---
Train mTransformer-Large baseline
---------------------------------
|
[
"### EC40 is an English-Centric Multilingual Machine Translation Dataset. It has over 60 Million sentences including 40 Languages across 5 Language Families.",
"#### Note: The dataset is cleaned and pre-processed using tools like Moses, for more details, please refer to the paper.",
"### Features:\n\n\n1. We carefully balanced the dataset across resources and languages by strictly maintaining each resource group containing 5 language families and each family consists of 8 representative languages.\n2. EC40 covers a wide spectrum of resource availability, ranging from High(5M) to Medium(1M), Low(100K), and extremely-Low(50K) resources.\n3. In total, there are 80 English-centric directions for training and 1,640 directions (including all supervised and ZS directions) for evaluation.\n4. We make use of Ntrex-128 and Flores-200 as our validation and test set.\n\n\n\n\n---\n\n\nLanguages and Family\n--------------------\n\n\n\n\n\n---\n\n\nDataset Stats\n-------------\n\n\nResource: High, Languages: de, nl, fr, es, ru, cs, hi, bn, ar, he, Size: 5M\nResource: Medium, Languages: sv, da, it, pt, pl, bg, kn, mr, mt, ha, Size: 1M\nResource: Low, Languages: af, lb, ro, oc, uk, sr, sd, gu, ti, am, Size: 100k\nResource: Extremely-Low, Languages: no, is, ast, ca, be, bs, ne, ur, kab, so, Size: 50k\n\n\n\n\n---\n\n\nBuild Fairseq dataset (Shard->to avoid RAM OOM)\n-----------------------------------------------\n\n\n \n\n\n\n---\n\n\nTrain mTransformer-Large baseline\n---------------------------------"
] |
[
"TAGS\n#region-us \n",
"### EC40 is an English-Centric Multilingual Machine Translation Dataset. It has over 60 Million sentences including 40 Languages across 5 Language Families.",
"#### Note: The dataset is cleaned and pre-processed using tools like Moses, for more details, please refer to the paper.",
"### Features:\n\n\n1. We carefully balanced the dataset across resources and languages by strictly maintaining each resource group containing 5 language families and each family consists of 8 representative languages.\n2. EC40 covers a wide spectrum of resource availability, ranging from High(5M) to Medium(1M), Low(100K), and extremely-Low(50K) resources.\n3. In total, there are 80 English-centric directions for training and 1,640 directions (including all supervised and ZS directions) for evaluation.\n4. We make use of Ntrex-128 and Flores-200 as our validation and test set.\n\n\n\n\n---\n\n\nLanguages and Family\n--------------------\n\n\n\n\n\n---\n\n\nDataset Stats\n-------------\n\n\nResource: High, Languages: de, nl, fr, es, ru, cs, hi, bn, ar, he, Size: 5M\nResource: Medium, Languages: sv, da, it, pt, pl, bg, kn, mr, mt, ha, Size: 1M\nResource: Low, Languages: af, lb, ro, oc, uk, sr, sd, gu, ti, am, Size: 100k\nResource: Extremely-Low, Languages: no, is, ast, ca, be, bs, ne, ur, kab, so, Size: 50k\n\n\n\n\n---\n\n\nBuild Fairseq dataset (Shard->to avoid RAM OOM)\n-----------------------------------------------\n\n\n \n\n\n\n---\n\n\nTrain mTransformer-Large baseline\n---------------------------------"
] |
[
6,
35,
30,
327
] |
[
"passage: TAGS\n#region-us \n### EC40 is an English-Centric Multilingual Machine Translation Dataset. It has over 60 Million sentences including 40 Languages across 5 Language Families.#### Note: The dataset is cleaned and pre-processed using tools like Moses, for more details, please refer to the paper.### Features:\n\n\n1. We carefully balanced the dataset across resources and languages by strictly maintaining each resource group containing 5 language families and each family consists of 8 representative languages.\n2. EC40 covers a wide spectrum of resource availability, ranging from High(5M) to Medium(1M), Low(100K), and extremely-Low(50K) resources.\n3. In total, there are 80 English-centric directions for training and 1,640 directions (including all supervised and ZS directions) for evaluation.\n4. We make use of Ntrex-128 and Flores-200 as our validation and test set.\n\n\n\n\n---\n\n\nLanguages and Family\n--------------------\n\n\n\n\n\n---\n\n\nDataset Stats\n-------------\n\n\nResource: High, Languages: de, nl, fr, es, ru, cs, hi, bn, ar, he, Size: 5M\nResource: Medium, Languages: sv, da, it, pt, pl, bg, kn, mr, mt, ha, Size: 1M\nResource: Low, Languages: af, lb, ro, oc, uk, sr, sd, gu, ti, am, Size: 100k\nResource: Extremely-Low, Languages: no, is, ast, ca, be, bs, ne, ur, kab, so, Size: 50k\n\n\n\n\n---\n\n\nBuild Fairseq dataset (Shard->to avoid RAM OOM)\n-----------------------------------------------\n\n\n \n\n\n\n---\n\n\nTrain mTransformer-Large baseline\n---------------------------------"
] |
9566851f347081ff7799a7d4df829c4fb30bfddb
|
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench9
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Undi95/Mistral-11B-TestBench9
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench9](https://huggingface.co/Undi95/Mistral-11B-TestBench9) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench9_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-07T07:27:56.824577](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench9_public/blob/main/results_2023-11-07T07-27-56.824577.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.018351510067114093,
"em_stderr": 0.0013745278884539388,
"f1": 0.08351719798657717,
"f1_stderr": 0.0019210059131140958,
"acc": 0.4730081804816138,
"acc_stderr": 0.010845627369096797
},
"harness|drop|3": {
"em": 0.018351510067114093,
"em_stderr": 0.0013745278884539388,
"f1": 0.08351719798657717,
"f1_stderr": 0.0019210059131140958
},
"harness|gsm8k|5": {
"acc": 0.16148597422289612,
"acc_stderr": 0.01013595945213431
},
"harness|winogrande|5": {
"acc": 0.7845303867403315,
"acc_stderr": 0.011555295286059282
}
}
```
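As a quick sanity check on the numbers above, the top-level `acc` is the unweighted mean of the GSM8K and Winogrande accuracies (a minimal sketch; the variable names are illustrative, not part of the dataset schema):

```python
# Reproduce the aggregated "acc" value from the per-task accuracies above
gsm8k_acc = 0.16148597422289612
winogrande_acc = 0.7845303867403315

# The "all" section averages the accuracy-based tasks with equal weight
overall_acc = (gsm8k_acc + winogrande_acc) / 2
print(overall_acc)  # matches the reported 0.4730081804816138
```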
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench9
|
[
"region:us"
] |
2023-10-11T16:38:44+00:00
|
{"pretty_name": "Evaluation run of Undi95/Mistral-11B-TestBench9", "dataset_summary": "Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench9](https://huggingface.co/Undi95/Mistral-11B-TestBench9) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench9_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T07:27:56.824577](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench9_public/blob/main/results_2023-11-07T07-27-56.824577.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.018351510067114093,\n \"em_stderr\": 0.0013745278884539388,\n \"f1\": 0.08351719798657717,\n \"f1_stderr\": 0.0019210059131140958,\n \"acc\": 0.4730081804816138,\n \"acc_stderr\": 0.010845627369096797\n },\n \"harness|drop|3\": {\n \"em\": 0.018351510067114093,\n \"em_stderr\": 0.0013745278884539388,\n \"f1\": 0.08351719798657717,\n \"f1_stderr\": 0.0019210059131140958\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16148597422289612,\n \"acc_stderr\": 0.01013595945213431\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7845303867403315,\n \"acc_stderr\": 0.011555295286059282\n }\n}\n```", "repo_url": "https://huggingface.co/Undi95/Mistral-11B-TestBench9", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T11_48_18.495920", "path": ["**/details_harness|drop|3_2023-11-05T11-48-18.495920.parquet"]}, {"split": "2023_11_07T07_27_56.824577", "path": ["**/details_harness|drop|3_2023-11-07T07-27-56.824577.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T07-27-56.824577.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T11_48_18.495920", "path": ["**/details_harness|gsm8k|5_2023-11-05T11-48-18.495920.parquet"]}, {"split": "2023_11_07T07_27_56.824577", "path": ["**/details_harness|gsm8k|5_2023-11-07T07-27-56.824577.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T07-27-56.824577.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T11_48_18.495920", "path": ["**/details_harness|winogrande|5_2023-11-05T11-48-18.495920.parquet"]}, {"split": "2023_11_07T07_27_56.824577", "path": ["**/details_harness|winogrande|5_2023-11-07T07-27-56.824577.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-11-07T07-27-56.824577.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_05T11_48_18.495920", "path": ["results_2023-11-05T11-48-18.495920.parquet"]}, {"split": "2023_11_07T07_27_56.824577", "path": ["results_2023-11-07T07-27-56.824577.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T07-27-56.824577.parquet"]}]}]}
|
2023-12-01T14:34:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench9
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench9 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-07T07:27:56.824577 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench9",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench9 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T07:27:56.824577(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench9",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench9 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T07:27:56.824577(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench9## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench9 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T07:27:56.824577(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
956cebd7f5f9a0fed46de1e697c5905921683fa0
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080091",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T19:36:48.099081](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080091/blob/main/results_2023-10-24T19-36-48.099081.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131064,
"f1": 0.06776950503355726,
"f1_stderr": 0.00157312938548866,
"acc": 0.4068737946340684,
"acc_stderr": 0.00984774370435679
},
"harness|drop|3": {
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131064,
"f1": 0.06776950503355726,
"f1_stderr": 0.00157312938548866
},
"harness|gsm8k|5": {
"acc": 0.07657316148597422,
"acc_stderr": 0.007324564881451574
},
"harness|winogrande|5": {
"acc": 0.7371744277821626,
"acc_stderr": 0.012370922527262008
}
}
```
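Here too, the top-level `acc` is the unweighted mean of the GSM8K and Winogrande accuracies reported above (a minimal sketch; the variable names are illustrative, not part of the dataset schema):

```python
# Reproduce the aggregated "acc" value from the per-task accuracies above
gsm8k_acc = 0.07657316148597422
winogrande_acc = 0.7371744277821626

# The "all" section averages the accuracy-based tasks with equal weight
overall_acc = (gsm8k_acc + winogrande_acc) / 2
print(overall_acc)  # matches the reported 0.4068737946340684
```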
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080091
|
[
"region:us"
] |
2023-10-11T16:39:29+00:00
|
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080091\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T19:36:48.099081](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080091/blob/main/results_2023-10-24T19-36-48.099081.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004718959731543624,\n \"em_stderr\": 0.0007018360183131064,\n \"f1\": 0.06776950503355726,\n \"f1_stderr\": 0.00157312938548866,\n \"acc\": 0.4068737946340684,\n \"acc_stderr\": 0.00984774370435679\n },\n \"harness|drop|3\": {\n \"em\": 0.004718959731543624,\n \"em_stderr\": 0.0007018360183131064,\n \"f1\": 0.06776950503355726,\n \"f1_stderr\": 0.00157312938548866\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07657316148597422,\n \"acc_stderr\": 0.007324564881451574\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7371744277821626,\n \"acc_stderr\": 0.012370922527262008\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T19_36_48.099081", "path": ["**/details_harness|drop|3_2023-10-24T19-36-48.099081.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T19-36-48.099081.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T19_36_48.099081", "path": ["**/details_harness|gsm8k|5_2023-10-24T19-36-48.099081.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T19-36-48.099081.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hellaswag|10_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-39-05.539335.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-39-05.539335.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-39-05.539335.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-39-05.539335.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-39-05.539335.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-39-05.539335.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-39-05.539335.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-39-05.539335.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T19_36_48.099081", "path": ["**/details_harness|winogrande|5_2023-10-24T19-36-48.099081.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T19-36-48.099081.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T17_39_05.539335", "path": ["results_2023-10-11T17-39-05.539335.parquet"]}, {"split": "2023_10_24T19_36_48.099081", "path": ["results_2023-10-24T19-36-48.099081.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T19-36-48.099081.parquet"]}]}]}
|
2023-10-24T18:37:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-24T19:36:48.099081 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T19:36:48.099081(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T19:36:48.099081(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
180,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080091 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T19:36:48.099081(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7d4a0445bb28051fd7058e5498eebbf35c898de3
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080082",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-23T18:02:30.843384](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080082/blob/main/results_2023-10-23T18-02-30.843384.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131064,
"f1": 0.06889890939597328,
"f1_stderr": 0.0015900969200350048,
"acc": 0.4076319447477909,
"acc_stderr": 0.009880788504185114
},
"harness|drop|3": {
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131064,
"f1": 0.06889890939597328,
"f1_stderr": 0.0015900969200350048
},
"harness|gsm8k|5": {
"acc": 0.07808946171341925,
"acc_stderr": 0.007390654481108218
},
"harness|winogrande|5": {
"acc": 0.7371744277821626,
"acc_stderr": 0.012370922527262008
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080082
|
[
"region:us"
] |
2023-10-11T16:45:48+00:00
|
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080082\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T18:02:30.843384](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial-unit-080082/blob/main/results_2023-10-23T18-02-30.843384.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004718959731543624,\n \"em_stderr\": 0.0007018360183131064,\n \"f1\": 0.06889890939597328,\n \"f1_stderr\": 0.0015900969200350048,\n \"acc\": 0.4076319447477909,\n \"acc_stderr\": 0.009880788504185114\n },\n \"harness|drop|3\": {\n \"em\": 0.004718959731543624,\n \"em_stderr\": 0.0007018360183131064,\n \"f1\": 0.06889890939597328,\n \"f1_stderr\": 0.0015900969200350048\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07808946171341925,\n \"acc_stderr\": 0.007390654481108218\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7371744277821626,\n \"acc_stderr\": 0.012370922527262008\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_23T18_02_30.843384", "path": ["**/details_harness|drop|3_2023-10-23T18-02-30.843384.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T18-02-30.843384.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_23T18_02_30.843384", "path": ["**/details_harness|gsm8k|5_2023-10-23T18-02-30.843384.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T18-02-30.843384.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hellaswag|10_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-45-25.017539.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-45-25.017539.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-45-25.017539.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-45-25.017539.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-45-25.017539.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-45-25.017539.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-45-25.017539.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-45-25.017539.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_23T18_02_30.843384", "path": ["**/details_harness|winogrande|5_2023-10-23T18-02-30.843384.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-23T18-02-30.843384.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T17_45_25.017539", "path": ["results_2023-10-11T17-45-25.017539.parquet"]}, {"split": "2023_10_23T18_02_30.843384", "path": ["results_2023-10-23T18-02-30.843384.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T18-02-30.843384.parquet"]}]}]}
|
2023-10-23T17:02:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
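The code block itself is missing from this card. A minimal sketch, modeled on the loading example shown in the other card in this file — note that the repo id and the `harness_winogrande_5` config are inferred from the leaderboard's `details_<org>__<model>` naming convention, not stated in this card:

```python
# Requires the `datasets` package and network access to the Hugging Face Hub:
#   from datasets import load_dataset
#   data = load_dataset(repo, "harness_winogrande_5", split="train")

def details_repo(org_model: str) -> str:
    # Leaderboard details repos follow "open-llm-leaderboard/details_<org>__<model>"
    return "open-llm-leaderboard/details_" + org_model.replace("/", "__")

repo = details_repo("Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082")
```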
## Latest results
These are the latest results from run 2023-10-23T18:02:30.843384 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T18:02:30.843384(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T18:02:30.843384(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
180,
68,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial-unit-080082 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T18:02:30.843384(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7dd4babe2c729872ab1e34dfc02b24ae798b4d42
|
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mistral-7b-v13.1](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-27T07:58:51.376578](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13.1/blob/main/results_2023-10-27T07-58-51.376578.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.28953439597315433,
"em_stderr": 0.004644738434561709,
"f1": 0.3589429530201351,
"f1_stderr": 0.004562237952673667,
"acc": 0.401525754664538,
"acc_stderr": 0.010223042101778138
},
"harness|drop|3": {
"em": 0.28953439597315433,
"em_stderr": 0.004644738434561709,
"f1": 0.3589429530201351,
"f1_stderr": 0.004562237952673667
},
"harness|gsm8k|5": {
"acc": 0.08718726307808947,
"acc_stderr": 0.007770691416783547
},
"harness|winogrande|5": {
"acc": 0.7158642462509865,
"acc_stderr": 0.01267539278677273
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13.1
|
[
"region:us"
] |
2023-10-11T16:55:36+00:00
|
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mistral-7b-v13.1](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-27T07:58:51.376578](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13.1/blob/main/results_2023-10-27T07-58-51.376578.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.28953439597315433,\n \"em_stderr\": 0.004644738434561709,\n \"f1\": 0.3589429530201351,\n \"f1_stderr\": 0.004562237952673667,\n \"acc\": 0.401525754664538,\n \"acc_stderr\": 0.010223042101778138\n },\n \"harness|drop|3\": {\n \"em\": 0.28953439597315433,\n \"em_stderr\": 0.004644738434561709,\n \"f1\": 0.3589429530201351,\n \"f1_stderr\": 0.004562237952673667\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08718726307808947,\n \"acc_stderr\": 0.007770691416783547\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7158642462509865,\n \"acc_stderr\": 0.01267539278677273\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_27T07_58_51.376578", "path": ["**/details_harness|drop|3_2023-10-27T07-58-51.376578.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-27T07-58-51.376578.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_27T07_58_51.376578", "path": ["**/details_harness|gsm8k|5_2023-10-27T07-58-51.376578.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-27T07-58-51.376578.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hellaswag|10_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-55-12.784881.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-55-12.784881.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-55-12.784881.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-55-12.784881.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-55-12.784881.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-55-12.784881.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-55-12.784881.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-55-12.784881.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_27T07_58_51.376578", "path": ["**/details_harness|winogrande|5_2023-10-27T07-58-51.376578.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-27T07-58-51.376578.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T17_55_12.784881", "path": ["results_2023-10-11T17-55-12.784881.parquet"]}, {"split": "2023_10_27T07_58_51.376578", "path": ["results_2023-10-27T07-58-51.376578.parquet"]}, {"split": "latest", "path": ["results_2023-10-27T07-58-51.376578.parquet"]}]}]}
|
2023-10-27T06:59:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13.1 on the Open LLM Leaderboard.

The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-27T07:58:51.376578 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T07:58:51.376578(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T07:58:51.376578(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-27T07:58:51.376578(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ce15a1e88fe0c57f7755dd2f438360a7cadf4851
|
# Dataset Card for "plmn2.5l"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
csupiisc/plmn2.5l
|
[
"region:us"
] |
2023-10-11T16:56:24+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 753808, "num_examples": 10000}], "download_size": 299024, "dataset_size": 753808}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-11T16:56:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "plmn2.5l"
More Information needed
|
[
"# Dataset Card for \"plmn2.5l\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"plmn2.5l\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"plmn2.5l\"\n\nMore Information needed"
] |
00f2afd772b0b2b04eb0131faae87cbae25f7e11
|
# Dataset Card for "plmn3.5l"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
csupiisc/plmn3.5l
|
[
"region:us"
] |
2023-10-11T16:56:26+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 754449, "num_examples": 10000}], "download_size": 300127, "dataset_size": 754449}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-11T16:56:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "plmn3.5l"
More Information needed
|
[
"# Dataset Card for \"plmn3.5l\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"plmn3.5l\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"plmn3.5l\"\n\nMore Information needed"
] |
0fae9959e258ec5f59a533f280335183a43b7341
|
# Dataset Card for Evaluation run of jphme/em_german_leo_mistral
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jphme/em_german_leo_mistral
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [jphme/em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jphme__em_german_leo_mistral",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-26T05:35:49.227572](https://huggingface.co/datasets/open-llm-leaderboard/details_jphme__em_german_leo_mistral/blob/main/results_2023-10-26T05-35-49.227572.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2305998322147651,
"em_stderr": 0.004313653760724557,
"f1": 0.2864733640939601,
"f1_stderr": 0.004317447810452205,
"acc": 0.3954548691248602,
"acc_stderr": 0.009372608948757369
},
"harness|drop|3": {
"em": 0.2305998322147651,
"em_stderr": 0.004313653760724557,
"f1": 0.2864733640939601,
"f1_stderr": 0.004317447810452205
},
"harness|gsm8k|5": {
"acc": 0.056103108415466264,
"acc_stderr": 0.00633866843132188
},
"harness|winogrande|5": {
"acc": 0.7348066298342542,
"acc_stderr": 0.012406549466192858
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_jphme__em_german_leo_mistral
|
[
"region:us"
] |
2023-10-11T16:57:58+00:00
|
{"pretty_name": "Evaluation run of jphme/em_german_leo_mistral", "dataset_summary": "Dataset automatically created during the evaluation run of model [jphme/em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jphme__em_german_leo_mistral\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-26T05:35:49.227572](https://huggingface.co/datasets/open-llm-leaderboard/details_jphme__em_german_leo_mistral/blob/main/results_2023-10-26T05-35-49.227572.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2305998322147651,\n \"em_stderr\": 0.004313653760724557,\n \"f1\": 0.2864733640939601,\n \"f1_stderr\": 0.004317447810452205,\n \"acc\": 0.3954548691248602,\n \"acc_stderr\": 0.009372608948757369\n },\n \"harness|drop|3\": {\n \"em\": 0.2305998322147651,\n \"em_stderr\": 0.004313653760724557,\n \"f1\": 0.2864733640939601,\n \"f1_stderr\": 0.004317447810452205\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.056103108415466264,\n \"acc_stderr\": 0.00633866843132188\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7348066298342542,\n \"acc_stderr\": 0.012406549466192858\n }\n}\n```", "repo_url": "https://huggingface.co/jphme/em_german_leo_mistral", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T05_35_49.227572", "path": ["**/details_harness|drop|3_2023-10-26T05-35-49.227572.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T05-35-49.227572.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T05_35_49.227572", "path": ["**/details_harness|gsm8k|5_2023-10-26T05-35-49.227572.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-26T05-35-49.227572.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hellaswag|10_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T17-57-34.404631.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-57-34.404631.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-57-34.404631.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-57-34.404631.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-57-34.404631.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-57-34.404631.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T17-57-34.404631.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-57-34.404631.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T17-57-34.404631.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T05_35_49.227572", "path": ["**/details_harness|winogrande|5_2023-10-26T05-35-49.227572.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T05-35-49.227572.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T17_57_34.404631", "path": ["results_2023-10-11T17-57-34.404631.parquet"]}, {"split": "2023_10_26T05_35_49.227572", "path": ["results_2023-10-26T05-35-49.227572.parquet"]}, {"split": "latest", "path": ["results_2023-10-26T05-35-49.227572.parquet"]}]}]}
|
2023-10-26T04:36:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of jphme/em_german_leo_mistral
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model jphme/em_german_leo_mistral on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
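The code snippet that usually accompanies this sentence is missing from this copy of the card. A minimal sketch follows; note that the repository name and the `harness_<task>_<n-shot>` config pattern below are assumptions inferred from the leaderboard's naming conventions, not confirmed by this card:

```python
# Requires: pip install datasets  (then, with network access:)
#
#   from datasets import load_dataset
#   details = load_dataset(REPO, config_for("winogrande", 5), split="latest")

# Assumed repository name, following the leaderboard's
# "details_<org>__<model>" convention; verify before use.
REPO = "open-llm-leaderboard/details_jphme__em_german_leo_mistral"

def config_for(task: str, shots: int) -> str:
    """Build a config name using the 'harness_<task>_<n-shot>' pattern listed above."""
    return f"harness_{task}_{shots}"

print(config_for("winogrande", 5))  # -> harness_winogrande_5
```

Each config exposes one split per timestamped run plus a "latest" split, so `split="latest"` always returns the most recent evaluation details.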
## Latest results
These are the latest results from run 2023-10-26T05:35:49.227572 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of jphme/em_german_leo_mistral",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jphme/em_german_leo_mistral on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T05:35:49.227572 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of jphme/em_german_leo_mistral",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jphme/em_german_leo_mistral on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T05:35:49.227572 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jphme/em_german_leo_mistral## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model jphme/em_german_leo_mistral on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-26T05:35:49.227572(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f03f36a00bfc725681332a1e0a306e535c5f8fd2
|
# Dataset Card for "test_da_xlmr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/test_da_xlmr
|
[
"region:us"
] |
2023-10-11T17:15:48+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1281740030, "num_examples": 900000}], "download_size": 283712435, "dataset_size": 1281740030}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-11T17:16:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_da_xlmr"
More Information needed
|
[
"# Dataset Card for \"test_da_xlmr\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_da_xlmr\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_da_xlmr\"\n\nMore Information needed"
] |
6e6688daefff93672bc03230ce3893a6d175808c
|
wlaminack/nonlineartesting2
import numpy as np

def basic(array1):
    # Center the first four features around zero.
    x = array1[0] - .5
    y = array1[1] - .5
    z = array1[2] - .5
    t = array1[3] - .5
    r2 = x*x + y*y + z*z + t*t  # squared radius in 4-D
    # Nonlinear target with a noise term driven by the fifth feature.
    return 7*np.sin(9*r2) + np.random.random()*(array1[4] - .5)

f = np.apply_along_axis(basic, 1, a)
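The card does not show how the input `a` is built. A self-contained sketch, assuming `a` is a 2-D array of uniform samples with five features per row (an illustrative assumption, not part of the original card):

```python
import numpy as np

# Restating the function above together with the import it needs.
def basic(array1):
    x = array1[0] - .5
    y = array1[1] - .5
    z = array1[2] - .5
    t = array1[3] - .5
    r2 = x*x + y*y + z*z + t*t
    return 7*np.sin(9*r2) + np.random.random()*(array1[4] - .5)

# Hypothetical input: 1000 samples, 5 features each, drawn from [0, 1).
rng = np.random.default_rng(0)
a = rng.random((1000, 5))

# One target value per row.
f = np.apply_along_axis(basic, 1, a)
print(f.shape)
```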
|
wlaminack/trainingnonlinear2
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-11T17:31:08+00:00
|
{"license": "apache-2.0"}
|
2023-10-11T17:41:56+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
wlaminack/nonlineartesting2
def basic(array1):
x=(array1[0]-.5)
y=(array1[1]-.5)
z=(array1[2]-.5)
t=(array1[3]-.5)
r2=x*x+y*y+z*z+t*t
return 7*URL(9*r2)+URL()*(array1[4]-.5)
f=np.apply_along_axis(basic, 1, a)
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
490f9d72f59e2599b5d1eca5973b37a913270445
|
# Dataset Card for Synthetic CSAW 100k Mammograms
## Dataset Description
This is a synthetic mammogram dataset created with the latent diffusion model from *Generative AI for Medical Imaging: extending the MONAI Framework* paper.
The generative model was trained on the [CSAW-M dataset](https://arxiv.org/abs/2112.01330).
- **Paper:** https://arxiv.org/abs/2307.15208
- **Point of Contact:** [email protected]
### Dataset Summary
### Supported Tasks
Classification of the cancer masking level in mammograms.
The dataset contains 100k synthetic mammograms with 3 labels:
- "Low masking level" (score <= 2),
- "Medium masking level" (2 < score <= 6),
- "High masking level" (score > 6).
## Dataset Structure
- Images
- CSAW-M Labels
### Data Splits
We did not define data splits.
## Dataset Creation
We generated the synthetic data samples using the diffusion model finetuned on the [CSAW-M dataset](https://arxiv.org/abs/2112.01330).
### Personal and Sensitive Information
Following the GDPR definition, "Personal data is any information that relates to an identified or identifiable living individual."
We make sure that there is no "personal data" (re-identifiable information) by filtering with a deep learning model trained to identify patients.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset can be used to enhance the training of AI models for cancer masking.
### Discussion of Biases
There are biases towards specific pathologies.
## Additional Information
### Dataset Curators
### Licensing Information
This dataset is released under the [Open & Responsible AI license ("OpenRAIL")](https://huggingface.co/blog/open_rail)
### Citation Information
Pinaya, W. H., Graham, M. S., Kerfoot, E., Tudosiu, P. D., Dafflon, J., Fernandez, V., ... & Cardoso, M. J. (2023). Generative ai for medical imaging: extending the monai framework. arXiv preprint arXiv:2307.15208.
https://arxiv.org/abs/2307.15208
|
SinKove/synthetic_mammography_csaw
|
[
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:openrail",
"medical",
"arxiv:2112.01330",
"arxiv:2307.15208",
"doi:10.57967/hf/1254",
"region:us"
] |
2023-10-11T17:50:12+00:00
|
{"license": "openrail", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "pretty_name": "C", "tags": ["medical"]}
|
2023-10-11T20:04:10+00:00
|
[
"2112.01330",
"2307.15208"
] |
[] |
TAGS
#task_categories-image-classification #size_categories-10K<n<100K #license-openrail #medical #arxiv-2112.01330 #arxiv-2307.15208 #doi-10.57967/hf/1254 #region-us
|
# Dataset Card for Synthetic CSAW 100k Mammograms
## Dataset Description
This is a synthetic mammogram dataset created with the latent diffusion model from *Generative AI for Medical Imaging: extending the MONAI Framework* paper.
The generative model was trained on the CSAW-M dataset.
- Paper: URL
- Point of Contact: walter.diaz_sanz@URL
### Dataset Summary
### Supported Tasks
Classification masking of cancer in mammogram.
The dataset contains 100k synthetic mammograms with 3 labels:
- "Low masking level" (score <= 2),
- "Medium masking level" (2 < score <= 6),
- "High masking level" (score > 6).
## Dataset Structure
- Images
- CSAW-M Labels
### Data Splits
We did not define data splits.
## Dataset Creation
We generated the synthetic data samples using the diffusion model finetuned on the CSAW-M dataset.
### Personal and Sensitive Information
Following GDPR "Personal data is any information that relates to an identified or identifiable living individual."
We make sure that there are not "personal data" (re-identifiable information) by filtering with a deep learning model trained for identifying patients.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset can be used to enhance the training of AI models for cancer masking.
### Discussion of Biases
There are biases towards specific pathologies.
## Additional Information
### Dataset Curators
### Licensing Information
This dataset is released under the Open & Responsible AI license ("OpenRAIL")
Pinaya, W. H., Graham, M. S., Kerfoot, E., Tudosiu, P. D., Dafflon, J., Fernandez, V., ... & Cardoso, M. J. (2023). Generative ai for medical imaging: extending the monai framework. arXiv preprint arXiv:2307.15208.
URL
|
[
"# Dataset Card for Synthetic CSAW 100k Mammograms",
"## Dataset Description\n\nThis is a synthetic mammogram dataset created with the latent diffusion model from *Generative AI for Medical Imaging: extending the MONAI Framework* paper.\nThe generative model was trained on the CSAW-M dataset.\n\n- Paper: URL\n- Point of Contact: walter.diaz_sanz@URL",
"### Dataset Summary",
"### Supported Tasks\n\nClassification masking of cancer in mammogram. \nThe dataset contains 100k synthetic mammograms with 3 labels: \n- \"Low masking level\" (score <= 2), \n- \"Medium masking level\" (2 < score <= 6),\n- \"High masking level\" (score > 6).",
"## Dataset Structure\n\n- Images\n- CSAW-M Labels",
"### Data Splits\n\nWe did not define data splits.",
"## Dataset Creation\n\nWe generated the synthetic data samples using the diffusion model finetuned on the CSAW-M dataset.",
"### Personal and Sensitive Information\n\nFollowing GDPR \"Personal data is any information that relates to an identified or identifiable living individual.\"\n\nWe make sure that there are not \"personal data\" (re-identifiable information) by filtering with a deep learning model trained for identifying patients.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope that this dataset can used to enhance AI models training for cancer masking.",
"### Discussion of Biases\n\nThere are biases towards specific pathologies.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThis dataset is released under the Open & Responsible AI license (\"OpenRAIL\")\n\n\n\nPinaya, W. H., Graham, M. S., Kerfoot, E., Tudosiu, P. D., Dafflon, J., Fernandez, V., ... & Cardoso, M. J. (2023). Generative ai for medical imaging: extending the monai framework. arXiv preprint arXiv:2307.15208.\n\nURL"
] |
[
"TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #license-openrail #medical #arxiv-2112.01330 #arxiv-2307.15208 #doi-10.57967/hf/1254 #region-us \n",
"# Dataset Card for Synthetic CSAW 100k Mammograms",
"## Dataset Description\n\nThis is a synthetic mammogram dataset created with the latent diffusion model from *Generative AI for Medical Imaging: extending the MONAI Framework* paper.\nThe generative model was trained on the CSAW-M dataset.\n\n- Paper: URL\n- Point of Contact: walter.diaz_sanz@URL",
"### Dataset Summary",
"### Supported Tasks\n\nClassification masking of cancer in mammogram. \nThe dataset contains 100k synthetic mammograms with 3 labels: \n- \"Low masking level\" (score <= 2), \n- \"Medium masking level\" (2 < score <= 6),\n- \"High masking level\" (score > 6).",
"## Dataset Structure\n\n- Images\n- CSAW-M Labels",
"### Data Splits\n\nWe did not define data splits.",
"## Dataset Creation\n\nWe generated the synthetic data samples using the diffusion model finetuned on the CSAW-M dataset.",
"### Personal and Sensitive Information\n\nFollowing GDPR \"Personal data is any information that relates to an identified or identifiable living individual.\"\n\nWe make sure that there are not \"personal data\" (re-identifiable information) by filtering with a deep learning model trained for identifying patients.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope that this dataset can used to enhance AI models training for cancer masking.",
"### Discussion of Biases\n\nThere are biases towards specific pathologies.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThis dataset is released under the Open & Responsible AI license (\"OpenRAIL\")\n\n\n\nPinaya, W. H., Graham, M. S., Kerfoot, E., Tudosiu, P. D., Dafflon, J., Fernandez, V., ... & Cardoso, M. J. (2023). Generative ai for medical imaging: extending the monai framework. arXiv preprint arXiv:2307.15208.\n\nURL"
] |
[
67,
16,
78,
6,
79,
14,
13,
32,
64,
8,
25,
19,
5,
6,
106
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #license-openrail #medical #arxiv-2112.01330 #arxiv-2307.15208 #doi-10.57967/hf/1254 #region-us \n# Dataset Card for Synthetic CSAW 100k Mammograms## Dataset Description\n\nThis is a synthetic mammogram dataset created with the latent diffusion model from *Generative AI for Medical Imaging: extending the MONAI Framework* paper.\nThe generative model was trained on the CSAW-M dataset.\n\n- Paper: URL\n- Point of Contact: walter.diaz_sanz@URL### Dataset Summary### Supported Tasks\n\nClassification masking of cancer in mammogram. \nThe dataset contains 100k synthetic mammograms with 3 labels: \n- \"Low masking level\" (score <= 2), \n- \"Medium masking level\" (2 < score <= 6),\n- \"High masking level\" (score > 6).## Dataset Structure\n\n- Images\n- CSAW-M Labels### Data Splits\n\nWe did not define data splits.## Dataset Creation\n\nWe generated the synthetic data samples using the diffusion model finetuned on the CSAW-M dataset.### Personal and Sensitive Information\n\nFollowing GDPR \"Personal data is any information that relates to an identified or identifiable living individual.\"\n\nWe make sure that there are not \"personal data\" (re-identifiable information) by filtering with a deep learning model trained for identifying patients.## Considerations for Using the Data### Social Impact of Dataset\n\nWe hope that this dataset can used to enhance AI models training for cancer masking.### Discussion of Biases\n\nThere are biases towards specific pathologies.## Additional Information### Dataset Curators"
] |
614bdfb1ceba0633cc9899897f8649cffa185a16
|
# Dataset Card for "fighter_jet_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
timestap/fighter_jet_captions
|
[
"region:us"
] |
2023-10-11T18:09:17+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4591975.0, "num_examples": 25}], "download_size": 4584088, "dataset_size": 4591975.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-11T18:09:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fighter_jet_captions"
More Information needed
|
[
"# Dataset Card for \"fighter_jet_captions\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fighter_jet_captions\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fighter_jet_captions\"\n\nMore Information needed"
] |
9e82c4d805aab108325cae87d89e1d3b6e7338e9
|
# Dataset Card for "platy_icl5_subset1.0_maxD50_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ostapeno/platy_icl5_prmt00_maxD50_3
|
[
"region:us"
] |
2023-10-11T18:09:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "docno", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "icl_examples", "sequence": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "author_instr", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "author_response", "dtype": "string"}, {"name": "normalized_cumul_logprob_response", "dtype": "float64"}], "splits": [{"name": "formal_logic", "num_bytes": 543627.0481283423, "num_examples": 108}, {"name": "machine_learning", "num_bytes": 583895.7183600713, "num_examples": 116}, {"name": "global_facts", "num_bytes": 734903.2317290553, "num_examples": 146}, {"name": "abstract_algebra", "num_bytes": 372485.19964349375, "num_examples": 74}, {"name": "high_school_physics", "num_bytes": 785239.0695187165, "num_examples": 156}, {"name": "college_biology", "num_bytes": 488257.6265597148, "num_examples": 97}, {"name": "high_school_government_and_politics", "num_bytes": 427854.6212121212, "num_examples": 85}, {"name": "prehistory", "num_bytes": 568794.9670231729, "num_examples": 113}, {"name": "security_studies", "num_bytes": 468123.29144385026, "num_examples": 93}, {"name": "sociology", "num_bytes": 674500.2263814617, "num_examples": 134}], "download_size": 2208464, "dataset_size": 5647681.0}}
|
2023-10-11T18:09:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "platy_icl5_subset1.0_maxD50_3"
More Information needed
|
[
"# Dataset Card for \"platy_icl5_subset1.0_maxD50_3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"platy_icl5_subset1.0_maxD50_3\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"platy_icl5_subset1.0_maxD50_3\"\n\nMore Information needed"
] |
26c510aad6260e13fee502e27143f394878dbd3d
|
# Dataset Card for "humansleepproject-rr-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
emi429/humansleepproject-rr-small
|
[
"region:us"
] |
2023-10-11T18:09:28+00:00
|
{"dataset_info": {"features": [{"name": "rr_intervals", "sequence": "float64"}, {"name": "sleep_stage", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 131445053, "num_examples": 56208}], "download_size": 21938826, "dataset_size": 131445053}}
|
2023-10-11T19:00:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "humansleepproject-rr-small"
More Information needed
|
[
"# Dataset Card for \"humansleepproject-rr-small\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"humansleepproject-rr-small\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"humansleepproject-rr-small\"\n\nMore Information needed"
] |
7ba40a6286f64894892ee1a8064e77b8c908497b
|
# Dataset Card for "Gender-by-Name"
This dataset attributes first names to genders, giving counts and probabilities. It combines open-source government data from the US, UK, Canada, and Australia. The dataset is taken from [UCI Machine Learning Repository](https://archive.ics.uci.edu/dataset/591/gender+by+name)
## Dataset Information
This dataset combines raw counts for first/given names of male and female babies in those time periods, and then calculates a probability for a name given the aggregate count. Source datasets are from government authorities:
- US: Baby Names from Social Security Card Applications - National Data, 1880 to 2019
- UK: Baby names in England and Wales Statistical bulletins, 2011 to 2018
- Canada: British Columbia 100 Years of Popular Baby names, 1918 to 2018
- Australia: Popular Baby Names, Attorney-General's Department, 1944 to 2019
## Has Missing Values?
No
## Variable Information
Name: String
Gender: 0/1 (female/male),
Count: Integer
Probability: Float
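One plausible reading of "a probability for a name given the aggregate count" is the share of a name's total count that falls on each gender. A toy sketch under that assumption (invented counts, not real data):

```python
# Toy (name, gender) -> count table; the real dataset aggregates
# government birth records from the US, UK, Canada, and Australia.
counts = {
    ("Mary", "F"): 100,
    ("Mary", "M"): 5,
    ("John", "M"): 200,
}

def probability(name: str, gender: str) -> float:
    # Divide this row's count by the total count for the name
    # across both genders.
    total = sum(c for (n, _), c in counts.items() if n == name)
    return counts[(name, gender)] / total

print(round(probability("Mary", "F"), 4))
```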
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erickrribeiro/gender-by-name
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"language:pt",
"license:cc-by-4.0",
"gender_by_name",
"social_science",
"uci",
"region:us"
] |
2023-10-11T18:42:24+00:00
|
{"language": ["en", "pt"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "Gender by Name", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Name", "dtype": "string"}, {"name": "Gender", "dtype": {"class_label": {"names": {"0": "F", "1": "M"}}}}, {"name": "Count", "dtype": "int64"}, {"name": "Probability", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 4090843.4554794286, "num_examples": 117815}, {"name": "test", "num_bytes": 1022719.5445205712, "num_examples": 29454}], "download_size": 2497614, "dataset_size": 5113563}, "tags": ["gender_by_name", "social_science", "uci"]}
|
2023-10-11T19:10:33+00:00
|
[] |
[
"en",
"pt"
] |
TAGS
#task_categories-text-classification #size_categories-100K<n<1M #language-English #language-Portuguese #license-cc-by-4.0 #gender_by_name #social_science #uci #region-us
|
# Dataset Card for "Gender-by-Name"
This dataset attributes first names to genders, giving counts and probabilities. It combines open-source government data from the US, UK, Canada, and Australia. The dataset is taken from UCI Machine Learning Repository
## Dataset Information
This dataset combines raw counts for first/given names of male and female babies in those time periods, and then calculates a probability for a name given the aggregate count. Source datasets are from government authorities:
-US: Baby Names from Social Security Card Applications - National Data, 1880 to 2019
-UK: Baby names in England and Wales Statistical bulletins, 2011 to 2018
-Canada: British Columbia 100 Years of Popular Baby names, 1918 to 2018
-Australia: Popular Baby Names, Attorney-General's Department, 1944 to 2019
## Has Missing Values?
No
## Variable Information
Name: String
Gender: 0/1 (female/male),
Count: Integer
Probability: Float
More Information needed
|
[
"# Dataset Card for \"Gender-by-Name\"\n\nThis dataset attributes first names to genders, giving counts and probabilities. It combines open-source government data from the US, UK, Canada, and Australia. The dataset is taken from UCI Machine Learning Repository",
"## Dataset Information\n\nThis dataset combines raw counts for first/given names of male and female babies in those time periods, and then calculates a probability for a name given the aggregate count. Source datasets are from government authorities:\n-US: Baby Names from Social Security Card Applications - National Data, 1880 to 2019\n-UK: Baby names in England and Wales Statistical bulletins, 2011 to 2018\n-Canada: British Columbia 100 Years of Popular Baby names, 1918 to 2018\n-Australia: Popular Baby Names, Attorney-General's Department, 1944 to 2019",
"## Has Missing Values?\nNo",
"## Variable Information\nName: String\t\nGender: 0/1 (female/male), \nCount: Integer\nProbability: Float\n\nMore Information needed"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #language-Portuguese #license-cc-by-4.0 #gender_by_name #social_science #uci #region-us \n",
"# Dataset Card for \"Gender-by-Name\"\n\nThis dataset attributes first names to genders, giving counts and probabilities. It combines open-source government data from the US, UK, Canada, and Australia. The dataset is taken from UCI Machine Learning Repository",
"## Dataset Information\n\nThis dataset combines raw counts for first/given names of male and female babies in those time periods, and then calculates a probability for a name given the aggregate count. Source datasets are from government authorities:\n-US: Baby Names from Social Security Card Applications - National Data, 1880 to 2019\n-UK: Baby names in England and Wales Statistical bulletins, 2011 to 2018\n-Canada: British Columbia 100 Years of Popular Baby names, 1918 to 2018\n-Australia: Popular Baby Names, Attorney-General's Department, 1944 to 2019",
"## Has Missing Values?\nNo",
"## Variable Information\nName: String\t\nGender: 0/1 (female/male), \nCount: Integer\nProbability: Float\n\nMore Information needed"
] |
[
61,
63,
129,
8,
31
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #language-Portuguese #license-cc-by-4.0 #gender_by_name #social_science #uci #region-us \n# Dataset Card for \"Gender-by-Name\"\n\nThis dataset attributes first names to genders, giving counts and probabilities. It combines open-source government data from the US, UK, Canada, and Australia. The dataset is taken from UCI Machine Learning Repository## Dataset Information\n\nThis dataset combines raw counts for first/given names of male and female babies in those time periods, and then calculates a probability for a name given the aggregate count. Source datasets are from government authorities:\n-US: Baby Names from Social Security Card Applications - National Data, 1880 to 2019\n-UK: Baby names in England and Wales Statistical bulletins, 2011 to 2018\n-Canada: British Columbia 100 Years of Popular Baby names, 1918 to 2018\n-Australia: Popular Baby Names, Attorney-General's Department, 1944 to 2019## Has Missing Values?\nNo## Variable Information\nName: String\t\nGender: 0/1 (female/male), \nCount: Integer\nProbability: Float\n\nMore Information needed"
] |
72be715be370b008d1860212f36e4fbb7590c644
|
# [WIP] Dataset Card for "nordjylland-news-summarization-subset"
*Please note that both this dataset and its dataset card are works in progress. For now, refer to the related [thesis](https://sorenmulli.github.io/thesis/thesis.pdf) for all details.*
|
sorenmulli/nordjylland-news-summarization-subset
|
[
"region:us"
] |
2023-10-11T18:54:16+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text_len", "dtype": "int64"}, {"name": "summary_len", "dtype": "int64"}, {"name": "ind", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 243846, "num_examples": 300}], "download_size": 162666, "dataset_size": 243846}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2024-01-15T19:36:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# [WIP] Dataset Card for "nordjylland-news-summarization-subset"
*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*
|
[
"# [WIP] Dataset Card for \"nordjylland-news-summarization-subset\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*"
] |
[
"TAGS\n#region-us \n",
"# [WIP] Dataset Card for \"nordjylland-news-summarization-subset\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*"
] |
[
6,
51
] |
[
"passage: TAGS\n#region-us \n# [WIP] Dataset Card for \"nordjylland-news-summarization-subset\"\n\n*Please note that this dataset and dataset card both are works in progress. For now refer to the related thesis for all details*"
] |
ef9db53448844b0ded06e20b7454476cf3aec27c
|
This version of the dataset only has responses from GPT-4, Claude-1, Claude-2, Claude-instant-1, and GPT-3.5-turbo
|
Nebulous/lmsys-chat-1m-smortmodelsonly
|
[
"region:us"
] |
2023-10-11T19:07:57+00:00
|
{}
|
2023-10-11T20:07:55+00:00
|
[] |
[] |
TAGS
#region-us
|
This version of the dataset only has responses from GPT-4, Claude-1, Claude-2, Claude-instant-1, and GPT-3.5-turbo
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
cae48a2ef63efd7002e4d25dade9b9e356d04273
|
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench11
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Undi95/Mistral-11B-TestBench11
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench11](https://huggingface.co/Undi95/Mistral-11B-TestBench11) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-28T01:59:23.177639](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11/blob/main/results_2023-10-28T01-59-23.177639.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.02904781879194631,
"em_stderr": 0.0017198688690203193,
"f1": 0.09573615771812093,
"f1_stderr": 0.0021674728464020697,
"acc": 0.463391282649971,
"acc_stderr": 0.010754512266719978
},
"harness|drop|3": {
"em": 0.02904781879194631,
"em_stderr": 0.0017198688690203193,
"f1": 0.09573615771812093,
"f1_stderr": 0.0021674728464020697
},
"harness|gsm8k|5": {
"acc": 0.14935557240333586,
"acc_stderr": 0.00981809072372729
},
"harness|winogrande|5": {
"acc": 0.7774269928966061,
"acc_stderr": 0.011690933809712667
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11
|
[
"region:us"
] |
2023-10-11T19:08:59+00:00
|
{"pretty_name": "Evaluation run of Undi95/Mistral-11B-TestBench11", "dataset_summary": "Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench11](https://huggingface.co/Undi95/Mistral-11B-TestBench11) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T01:59:23.177639](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11/blob/main/results_2023-10-28T01-59-23.177639.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.02904781879194631,\n \"em_stderr\": 0.0017198688690203193,\n \"f1\": 0.09573615771812093,\n \"f1_stderr\": 0.0021674728464020697,\n \"acc\": 0.463391282649971,\n \"acc_stderr\": 0.010754512266719978\n },\n \"harness|drop|3\": {\n \"em\": 0.02904781879194631,\n \"em_stderr\": 0.0017198688690203193,\n \"f1\": 0.09573615771812093,\n \"f1_stderr\": 0.0021674728464020697\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14935557240333586,\n \"acc_stderr\": 0.00981809072372729\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7774269928966061,\n \"acc_stderr\": 0.011690933809712667\n }\n}\n```", "repo_url": "https://huggingface.co/Undi95/Mistral-11B-TestBench11", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|arc:challenge|25_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T01_59_23.177639", "path": ["**/details_harness|drop|3_2023-10-28T01-59-23.177639.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T01-59-23.177639.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T01_59_23.177639", "path": ["**/details_harness|gsm8k|5_2023-10-28T01-59-23.177639.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T01-59-23.177639.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hellaswag|10_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-08-34.702863.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-08-34.702863.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-08-34.702863.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-08-34.702863.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-08-34.702863.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T20-08-34.702863.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T20-08-34.702863.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T20-08-34.702863.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T01_59_23.177639", "path": ["**/details_harness|winogrande|5_2023-10-28T01-59-23.177639.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T01-59-23.177639.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T20_08_34.702863", "path": ["results_2023-10-11T20-08-34.702863.parquet"]}, {"split": "2023_10_28T01_59_23.177639", "path": ["results_2023-10-28T01-59-23.177639.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T01-59-23.177639.parquet"]}]}]}
|
2023-10-28T00:59:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench11
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench11 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
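The loading snippet itself did not survive in this card; a minimal sketch is below, mirroring the pattern used by the sibling TestBench10 card later in this file. The repo id is inferred from the leaderboard's `details_<org>__<model>` naming convention and this card's config names, so treat it as an assumption rather than a confirmed identifier.

```python
# Hedged sketch: REPO_ID is inferred from the Open LLM Leaderboard naming
# convention (open-llm-leaderboard/details_<org>__<model>) and the config
# names in this card's metadata; it is an assumption, not confirmed here.
REPO_ID = "open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench11"

def load_eval_details(config_name: str = "harness_truthfulqa_mc_0",
                      split: str = "latest"):
    """Load the per-task details for one evaluated task.

    The "latest" split always points at the newest run's results; a
    timestamped split name selects one specific run instead.
    """
    from datasets import load_dataset  # third-party: pip install datasets
    return load_dataset(REPO_ID, config_name, split=split)

if __name__ == "__main__":
    data = load_eval_details()
    print(data)
```

Passing a timestamp split such as `"2023_10_11T20_08_34.702863"` instead of `"latest"` pins the load to that single run.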
## Latest results
These are the latest results from run 2023-10-28T01:59:23.177639 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench11",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench11 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T01:59:23.177639(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench11",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench11 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T01:59:23.177639(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench11## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench11 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T01:59:23.177639(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3c84399748bf31d2c5652160f52c724c44bffbb2
|
# Dataset Card for "my-NFT-text-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hongerzh/my-NFT-text-1
|
[
"region:us"
] |
2023-10-11T19:10:18+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5747231428.67, "num_examples": 29339}, {"name": "validation", "num_bytes": 1910360497.185, "num_examples": 9777}, {"name": "test", "num_bytes": 2129331391.38, "num_examples": 9780}], "download_size": 9022260166, "dataset_size": 9786923317.235}}
|
2023-10-11T20:25:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "my-NFT-text-1"
More Information needed
|
[
"# Dataset Card for \"my-NFT-text-1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"my-NFT-text-1\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"my-NFT-text-1\"\n\nMore Information needed"
] |
d5f88e19a8a0818bad4eabcf5794f8626f6f992b
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 2
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
|
ostapeno/platy_icl2_subset1.0_maxD50_3
|
[
"region:us"
] |
2023-10-11T19:26:35+00:00
|
{}
|
2023-10-11T19:26:48+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 2
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 2",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 2",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10"
] |
[
6,
9,
10,
5,
9,
14,
12,
12,
27,
7
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## subset: 1.0## icl_examples: 2## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10"
] |
f3e6bfff5691989b654be48eafee6c7a7dae23bd
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
|
ostapeno/platy_icl5_subset1.0_maxD50_3
|
[
"region:us"
] |
2023-10-11T19:31:11+00:00
|
{}
|
2023-10-11T19:35:36+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10"
] |
[
6,
9,
10,
5,
9,
14,
12,
12,
27,
7
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## subset: 1.0## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10"
] |
49e82f8613323bc234e2d790a947853a66b8051f
|
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench10
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Undi95/Mistral-11B-TestBench10
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench10](https://huggingface.co/Undi95/Mistral-11B-TestBench10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench10",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-11T20:32:37.017457](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench10/blob/main/results_2023-10-11T20-32-37.017457.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6389303587566675,
"acc_stderr": 0.033080650268054235,
"acc_norm": 0.6424809348544399,
"acc_norm_stderr": 0.033059008030264514,
"mc1": 0.39412484700122397,
"mc1_stderr": 0.017106588140700322,
"mc2": 0.5556543619352063,
"mc2_stderr": 0.015507002997196854
},
"harness|arc:challenge|25": {
"acc": 0.6220136518771331,
"acc_stderr": 0.014169664520303098,
"acc_norm": 0.6424914675767918,
"acc_norm_stderr": 0.014005494275916576
},
"harness|hellaswag|10": {
"acc": 0.6533559051981677,
"acc_stderr": 0.004749286071559565,
"acc_norm": 0.8423620792670783,
"acc_norm_stderr": 0.003636564286352674
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595852,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595852
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.038234289699266046,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.038234289699266046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569525,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.037455547914624555,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.037455547914624555
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.03255525359340354,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.03255525359340354
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.02516798233389414,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.02516798233389414
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7451612903225806,
"acc_stderr": 0.024790118459332208,
"acc_norm": 0.7451612903225806,
"acc_norm_stderr": 0.024790118459332208
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.028606204289229872,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.028606204289229872
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919443,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6820512820512821,
"acc_stderr": 0.02361088430892786,
"acc_norm": 0.6820512820512821,
"acc_norm_stderr": 0.02361088430892786
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465073,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465073
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6512605042016807,
"acc_stderr": 0.030956636328566548,
"acc_norm": 0.6512605042016807,
"acc_norm_stderr": 0.030956636328566548
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.015919557829976044,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.015919557829976044
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.02732547096671631,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.02732547096671631
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.02655837250266192,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.02655837250266192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.03160295143776679,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.03160295143776679
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.021901905115073325,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.021901905115073325
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8122605363984674,
"acc_stderr": 0.013964393769899133,
"acc_norm": 0.8122605363984674,
"acc_norm_stderr": 0.013964393769899133
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.708092485549133,
"acc_stderr": 0.024476994076247337,
"acc_norm": 0.708092485549133,
"acc_norm_stderr": 0.024476994076247337
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.40782122905027934,
"acc_stderr": 0.016435865260914742,
"acc_norm": 0.40782122905027934,
"acc_norm_stderr": 0.016435865260914742
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.02591780611714716,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.02591780611714716
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.02608270069539966,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.02608270069539966
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7098765432098766,
"acc_stderr": 0.025251173936495026,
"acc_norm": 0.7098765432098766,
"acc_norm_stderr": 0.025251173936495026
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.029736592526424438,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.029736592526424438
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4498044328552803,
"acc_stderr": 0.01270572149856511,
"acc_norm": 0.4498044328552803,
"acc_norm_stderr": 0.01270572149856511
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462923,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462923
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495144,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495144
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784603,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784603
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8557213930348259,
"acc_stderr": 0.024845753212306053,
"acc_norm": 0.8557213930348259,
"acc_norm_stderr": 0.024845753212306053
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.39412484700122397,
"mc1_stderr": 0.017106588140700322,
"mc2": 0.5556543619352063,
"mc2_stderr": 0.015507002997196854
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench10
|
[
"region:us"
] |
2023-10-11T19:33:00+00:00
|
{"pretty_name": "Evaluation run of Undi95/Mistral-11B-TestBench10", "dataset_summary": "Dataset automatically created during the evaluation run of model [Undi95/Mistral-11B-TestBench10](https://huggingface.co/Undi95/Mistral-11B-TestBench10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench10\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-11T20:32:37.017457](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench10/blob/main/results_2023-10-11T20-32-37.017457.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6389303587566675,\n \"acc_stderr\": 0.033080650268054235,\n \"acc_norm\": 0.6424809348544399,\n \"acc_norm_stderr\": 0.033059008030264514,\n \"mc1\": 0.39412484700122397,\n \"mc1_stderr\": 0.017106588140700322,\n \"mc2\": 0.5556543619352063,\n \"mc2_stderr\": 0.015507002997196854\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6220136518771331,\n \"acc_stderr\": 0.014169664520303098,\n \"acc_norm\": 0.6424914675767918,\n \"acc_norm_stderr\": 0.014005494275916576\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6533559051981677,\n \"acc_stderr\": 0.004749286071559565,\n \"acc_norm\": 0.8423620792670783,\n \"acc_norm_stderr\": 0.003636564286352674\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.03255525359340354,\n \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.03255525359340354\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3941798941798942,\n \"acc_stderr\": 0.02516798233389414,\n \"acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.02516798233389414\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 
0.04426266681379909,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7451612903225806,\n \"acc_stderr\": 0.024790118459332208,\n \"acc_norm\": 0.7451612903225806,\n \"acc_norm_stderr\": 0.024790118459332208\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.028606204289229872,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229872\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6820512820512821,\n \"acc_stderr\": 0.02361088430892786,\n \"acc_norm\": 0.6820512820512821,\n \"acc_norm_stderr\": 0.02361088430892786\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465073,\n \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465073\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566548,\n \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566548\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8348623853211009,\n \"acc_stderr\": 0.015919557829976044,\n \"acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.015919557829976044\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.02732547096671631,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.02732547096671631\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n 
\"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.021901905115073325,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.021901905115073325\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n \"acc_stderr\": 0.013964393769899133,\n \"acc_norm\": 0.8122605363984674,\n \"acc_norm_stderr\": 0.013964393769899133\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.708092485549133,\n \"acc_stderr\": 0.024476994076247337,\n \"acc_norm\": 0.708092485549133,\n \"acc_norm_stderr\": 0.024476994076247337\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.40782122905027934,\n \"acc_stderr\": 0.016435865260914742,\n \"acc_norm\": 0.40782122905027934,\n \"acc_norm_stderr\": 0.016435865260914742\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 
0.6977491961414791,\n \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495026,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495026\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.029736592526424438,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.029736592526424438\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4498044328552803,\n \"acc_stderr\": 0.01270572149856511,\n \"acc_norm\": 0.4498044328552803,\n \"acc_norm_stderr\": 0.01270572149856511\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462923,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462923\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495144,\n \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495144\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784603,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784603\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8557213930348259,\n \"acc_stderr\": 0.024845753212306053,\n \"acc_norm\": 0.8557213930348259,\n \"acc_norm_stderr\": 0.024845753212306053\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 
0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.39412484700122397,\n \"mc1_stderr\": 0.017106588140700322,\n \"mc2\": 0.5556543619352063,\n \"mc2_stderr\": 0.015507002997196854\n }\n}\n```", "repo_url": "https://huggingface.co/Undi95/Mistral-11B-TestBench10", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|arc:challenge|25_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hellaswag|10_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-32-37.017457.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-32-37.017457.parquet", 
"**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-32-37.017457.parquet", 
"**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-32-37.017457.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-32-37.017457.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-32-37.017457.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T20-32-37.017457.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", 
"data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-marketing|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": 
["**/details_harness|truthfulqa:mc|0_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T20-32-37.017457.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T20_32_37.017457", "path": ["results_2023-10-11T20-32-37.017457.parquet"]}, {"split": "latest", "path": ["results_2023-10-11T20-32-37.017457.parquet"]}]}]}
|
2023-10-11T19:34:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench10
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench10 on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
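For instance (mirroring the loading snippet used by other evaluation-details repos; the config name below is one of this repo's configs, picked as an illustration):

```python
from datasets import load_dataset

# Details repos follow the naming convention details_{org}__{model};
# each eval task is a config, and "latest" points at the most recent run.
data = load_dataset(
    "open-llm-leaderboard/details_Undi95__Mistral-11B-TestBench10",
    "harness_truthfulqa_mc_0",
    split="latest",
)
```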
## Latest results
These are the latest results from run 2023-10-11T20:32:37.017457 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench10",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-11T20:32:37.017457(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench10",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-11T20:32:37.017457(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Undi95/Mistral-11B-TestBench10## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Undi95/Mistral-11B-TestBench10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-11T20:32:37.017457(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
76b78d97bbb5a1449fbe9496fdad3f6b4cbf38c1
|
# Dataset Card for "fm_queries_classifier"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
coastalcph/fm_queries_classifier
|
[
"region:us"
] |
2023-10-11T19:37:31+00:00
|
{"dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "answer", "list": [{"name": "wikidata_id", "dtype": "string"}, {"name": "name", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "relation", "dtype": "string"}, {"name": "date", "dtype": "int64"}, {"name": "type", "dtype": "string"}, {"name": "is_mutable", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1437936, "num_examples": 8974}, {"name": "all_fm", "num_bytes": 33337568, "num_examples": 192165}, {"name": "validation", "num_bytes": 960721, "num_examples": 5793}, {"name": "test", "num_bytes": 1026699, "num_examples": 5698}], "download_size": 1260361, "dataset_size": 36762924}}
|
2023-10-18T12:36:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fm_queries_classifier"
More Information needed
|
[
"# Dataset Card for \"fm_queries_classifier\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fm_queries_classifier\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fm_queries_classifier\"\n\nMore Information needed"
] |
e28b9bad2f64f688ce979600f41bf34ab611d1a5
|
# Dataset Card for "itt_specdata"
## 1. Dataset Description
This dataset is used for the following project:
- **Homepage:** [Trad-fusion](https://github.com/hdparmar/Tradi-fusion)
### 1.1 Dataset Summary
This dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image has dimensions 512x512 and a single channel.
You can use this single channel to fine-tune or train different models, for example a diffusion model; but since diffusion models take 3 channels, there is a separate dataset, irish-traditional-tunes, for that purpose.
This single channel leaves room to experiment and add creativity in the other 2 channels: for example, the 2nd channel can be the delta, and the 3rd the delta-delta, of the 1st channel's mel spectrogram.
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The metadata.csv file of the dataset is in this format:
```
{"file_name": "path/to/the/image.png",
"text": "Irish Traditional Tune"}
```
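Each record is a JSON object, so as a minimal sketch (the file names below are hypothetical placeholders), metadata in this format can be written and read back as JSON Lines:

```python
import json

# Hypothetical records in the format shown above (file paths are placeholders).
records = [
    {"file_name": "images/tune_0001.png", "text": "Irish Traditional Tune"},
    {"file_name": "images/tune_0002.png", "text": "Irish Traditional Tune"},
]

# Write one JSON object per line, then read the file back.
with open("metadata.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

with open("metadata.jsonl") as f:
    loaded = [json.loads(line) for line in f]

print(loaded[0]["text"])  # Irish Traditional Tune
```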
### 2.2 Data Fields
- **file_name**: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- **text**: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "Irish Traditional Tune." This consistency can be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information is to follow. The same caption is used for all the mel spectrograms for ease of producing the dataset.
|
hdparmar/itt_specdata
|
[
"task_categories:text-to-image",
"task_categories:text-to-audio",
"license:apache-2.0",
"region:us"
] |
2023-10-11T20:00:34+00:00
|
{"license": "apache-2.0", "task_categories": ["text-to-image", "text-to-audio"], "pretty_name": "Data Irish Traditional Tunes (Spectrogram-Text)", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6043862350.049, "num_examples": 51217}], "download_size": 6011357718, "dataset_size": 6043862350.049}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-15T01:12:42+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #task_categories-text-to-audio #license-apache-2.0 #region-us
|
# Dataset Card for "itt_specdata"
## 1. Dataset Description
This dataset is used for the following project:
- Homepage: Trad-fusion
### 1.1 Dataset Summary
This dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image has dimensions 512x512 and a single channel.
You can use this single channel to fine-tune or train different models, for example a diffusion model; but since diffusion models take 3 channels, there is a separate dataset, irish-traditional-tunes, for that purpose.
This single channel leaves room to experiment and add creativity in the other 2 channels: for example, the 2nd channel can be the delta, and the 3rd the delta-delta, of the 1st channel's mel spectrogram.
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The metadata.csv file of the dataset is in this format
### 2.2 Data Fields
- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "Irish Traditional Tune." This consistency can be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information is to follow. The same caption is used for all the mel spectrograms for ease of producing the dataset.
|
[
"# Dataset Card for \"itt_specdata\"",
"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion",
"### 1.1 Dataset Summary\nThis dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image is of the dimensions 512x512 and includes 1 channel. \nThis 1 channel, you can use to fine-tune or train different models, example: Diffusion Model but since diffusion model takes 3 channel, I have other dataset irish-traditional-tunes for that purpose.\nThis 1 channel, gives a way to experiment and add creativity to other 2 channels, for example, 2nd channel can be delta, and 3rd can be delta-delta of the 1st channel mel-spectrogram.\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.",
"### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.",
"## 2. Dataset Structure",
"### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.",
"#### Example:\nThe URL file the dataset is in this format",
"### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"Irish Traditional Tune.\"",
"### 2.3 Data Splits\nAs of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.",
"### 2.4 Uniform Captions: A Special Note\nAll the spectrograms in this dataset come labeled with a uniform caption: \"Irish Traditional Tune.\" This consistency can be perhaps advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.",
"## NOTE\nFurthur imformation to follow and same caption for all the mel-spectrograms are for ease of work put into producing the dataset"
] |
[
"TAGS\n#task_categories-text-to-image #task_categories-text-to-audio #license-apache-2.0 #region-us \n",
"# Dataset Card for \"itt_specdata\"",
"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion",
"### 1.1 Dataset Summary\nThis dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image is of the dimensions 512x512 and includes 1 channel. \nThis 1 channel, you can use to fine-tune or train different models, example: Diffusion Model but since diffusion model takes 3 channel, I have other dataset irish-traditional-tunes for that purpose.\nThis 1 channel, gives a way to experiment and add creativity to other 2 channels, for example, 2nd channel can be delta, and 3rd can be delta-delta of the 1st channel mel-spectrogram.\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.",
"### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.",
"## 2. Dataset Structure",
"### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.",
"#### Example:\nThe URL file the dataset is in this format",
"### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"Irish Traditional Tune.\"",
"### 2.3 Data Splits\nAs of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.",
"### 2.4 Uniform Captions: A Special Note\nAll the spectrograms in this dataset come labeled with a uniform caption: \"Irish Traditional Tune.\" This consistency can be perhaps advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.",
"## NOTE\nFurthur imformation to follow and same caption for all the mel-spectrograms are for ease of work put into producing the dataset"
] |
[
39,
12,
20,
175,
30,
7,
65,
15,
80,
49,
76,
34
] |
[
"passage: TAGS\n#task_categories-text-to-image #task_categories-text-to-audio #license-apache-2.0 #region-us \n# Dataset Card for \"itt_specdata\"## 1. Dataset Description\n Dataset is used for the following project\n- Homepage: Trad-fusion### 1.1 Dataset Summary\nThis dataset contains mel spectrograms that represent traditional Irish tunes. Each spectrogram image is of the dimensions 512x512 and includes 1 channel. \nThis 1 channel, you can use to fine-tune or train different models, example: Diffusion Model but since diffusion model takes 3 channel, I have other dataset irish-traditional-tunes for that purpose.\nThis 1 channel, gives a way to experiment and add creativity to other 2 channels, for example, 2nd channel can be delta, and 3rd can be delta-delta of the 1st channel mel-spectrogram.\nThe primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.### 1.2 Languages\nThe dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.## 2. Dataset Structure### 2.1 Data Instances\nEach data instance in this dataset is composed of two main elements: an image and a text caption. \nThe image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.#### Example:\nThe URL file the dataset is in this format### 2.2 Data Fields\n- file_name: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.\n- text: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is \"Irish Traditional Tune.\"### 2.3 Data Splits\nAs of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset."
] |
2f2ce96a74d09e31fb7e81e57c1057d236a7c0d8
|
# Dataset Card for "raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iara-project/raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2
|
[
"region:us"
] |
2023-10-11T20:22:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "news_id", "dtype": "string"}, {"name": "embeddings", "sequence": "float64"}, {"name": "sentence", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1672222472, "num_examples": 176114}, {"name": "test", "num_bytes": 1670470539, "num_examples": 176114}], "download_size": 2474408751, "dataset_size": 3342693011}}
|
2023-10-11T20:24:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2"
More Information needed
|
[
"# Dataset Card for \"raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2\"\n\nMore Information needed"
] |
[
6,
38
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2\"\n\nMore Information needed"
] |
5a58ed7a0cae28a56ca48089fd222ff4159582ff
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 0
|
ostapeno/platy_icl5_maxD50_maxC1000000_prmt10_3
|
[
"region:us"
] |
2023-10-11T21:09:26+00:00
|
{}
|
2023-10-11T21:09:38+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 50
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 0
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 1",
"## inverse_template: 0"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 50",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 1",
"## inverse_template: 0"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 50## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 1## inverse_template: 0"
] |
f6262c7138abd8e9ed516e4a3af5927e5edd9205
|
# Dataset Card for Evaluation run of unaidedelf87777/wizard-mistral-v0.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/unaidedelf87777/wizard-mistral-v0.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [unaidedelf87777/wizard-mistral-v0.1](https://huggingface.co/unaidedelf87777/wizard-mistral-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_unaidedelf87777__wizard-mistral-v0.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T00:26:05.989697](https://huggingface.co/datasets/open-llm-leaderboard/details_unaidedelf87777__wizard-mistral-v0.1/blob/main/results_2023-10-24T00-26-05.989697.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.005662751677852349,
"em_stderr": 0.0007684582267637443,
"f1": 0.07014261744966427,
"f1_stderr": 0.0015546181894855703,
"acc": 0.4866237666597055,
"acc_stderr": 0.011199109496696186
},
"harness|drop|3": {
"em": 0.005662751677852349,
"em_stderr": 0.0007684582267637443,
"f1": 0.07014261744966427,
"f1_stderr": 0.0015546181894855703
},
"harness|gsm8k|5": {
"acc": 0.19029567854435178,
"acc_stderr": 0.010812347283182963
},
"harness|winogrande|5": {
"acc": 0.7829518547750592,
"acc_stderr": 0.01158587171020941
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_unaidedelf87777__wizard-mistral-v0.1
|
[
"region:us"
] |
2023-10-11T21:56:23+00:00
|
{"pretty_name": "Evaluation run of unaidedelf87777/wizard-mistral-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [unaidedelf87777/wizard-mistral-v0.1](https://huggingface.co/unaidedelf87777/wizard-mistral-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_unaidedelf87777__wizard-mistral-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T00:26:05.989697](https://huggingface.co/datasets/open-llm-leaderboard/details_unaidedelf87777__wizard-mistral-v0.1/blob/main/results_2023-10-24T00-26-05.989697.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.005662751677852349,\n \"em_stderr\": 0.0007684582267637443,\n \"f1\": 0.07014261744966427,\n \"f1_stderr\": 0.0015546181894855703,\n \"acc\": 0.4866237666597055,\n \"acc_stderr\": 0.011199109496696186\n },\n \"harness|drop|3\": {\n \"em\": 0.005662751677852349,\n \"em_stderr\": 0.0007684582267637443,\n \"f1\": 0.07014261744966427,\n \"f1_stderr\": 0.0015546181894855703\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.19029567854435178,\n \"acc_stderr\": 0.010812347283182963\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7829518547750592,\n \"acc_stderr\": 0.01158587171020941\n }\n}\n```", "repo_url": "https://huggingface.co/unaidedelf87777/wizard-mistral-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|arc:challenge|25_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T00_26_05.989697", "path": ["**/details_harness|drop|3_2023-10-24T00-26-05.989697.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T00-26-05.989697.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T00_26_05.989697", "path": ["**/details_harness|gsm8k|5_2023-10-24T00-26-05.989697.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T00-26-05.989697.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hellaswag|10_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T22-55-59.459837.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T22-55-59.459837.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-11T22-55-59.459837.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T22-55-59.459837.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-11T22-55-59.459837.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-11T22-55-59.459837.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T22-55-59.459837.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-11T22-55-59.459837.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T00_26_05.989697", "path": ["**/details_harness|winogrande|5_2023-10-24T00-26-05.989697.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T00-26-05.989697.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_11T22_55_59.459837", "path": ["results_2023-10-11T22-55-59.459837.parquet"]}, {"split": "2023_10_24T00_26_05.989697", "path": ["results_2023-10-24T00-26-05.989697.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T00-26-05.989697.parquet"]}]}]}
|
2023-10-23T23:26:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of unaidedelf87777/wizard-mistral-v0.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model unaidedelf87777/wizard-mistral-v0.1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
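The original snippet was stripped from this card, so here is a sketch. The repo id is inferred from the leaderboard's usual `details_<org>__<model>` naming convention and is an assumption, not confirmed by this card:

```python
# Sketch: derive the details-repo id for this model. The
# "open-llm-leaderboard/details_<org>__<model>" convention is an assumption
# based on other evaluation cards in this collection.
model = "unaidedelf87777/wizard-mistral-v0.1"
repo_id = "open-llm-leaderboard/details_" + model.replace("/", "__")
print(repo_id)

# With the `datasets` library installed, the details would then load as:
# from datasets import load_dataset
# data = load_dataset(repo_id, "harness_winogrande_5", split="train")
```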
## Latest results
These are the latest results from run 2023-10-24T00:26:05.989697 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of unaidedelf87777/wizard-mistral-v0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model unaidedelf87777/wizard-mistral-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T00:26:05.989697(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of unaidedelf87777/wizard-mistral-v0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model unaidedelf87777/wizard-mistral-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T00:26:05.989697(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of unaidedelf87777/wizard-mistral-v0.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model unaidedelf87777/wizard-mistral-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T00:26:05.989697(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
79e1f38c03404acd438a8eb46c3c1d70160ab6c5
|
# Dataset Card for "role_play_chat_v30"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RickBigL/role_play_chat_v30
|
[
"region:us"
] |
2023-10-11T22:13:30+00:00
|
{"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 341109069, "num_examples": 45651}], "download_size": 68658800, "dataset_size": 341109069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-11T22:44:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "role_play_chat_v30"
More Information needed
|
[
"# Dataset Card for \"role_play_chat_v30\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"role_play_chat_v30\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"role_play_chat_v30\"\n\nMore Information needed"
] |
ef5a7e815853735d9a67657d1cec04893821041c
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 10
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_1
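The settings above, gathered into one plain configuration mapping (a sketch; the key names simply mirror the headings, and the code that consumes them is not shown in this card):

```python
# Run settings from the headings above, as a config dict. Key names mirror
# the headings; the consuming evaluation code is not part of this card.
icl_config = {
    "model_setting_name": "platy",
    "max_context_length": 512,
    "icl_examples": 5,
    "icl_dataset_name": "lukaemon/mmlu",
    "max_documents_per_subject": 10,
    "max_contexts_per_subject": 1_000_000,
    "icl_use_out_options": True,
    "seed_dataset": "sordonia/my-wiki-latex_mmlu_from_valid_all",
    "subjects": "SUB_1",
}
print(icl_config["icl_dataset_name"])
```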
|
ostapeno/platy_icl5_maxD10_maxC1000000_0
|
[
"region:us"
] |
2023-10-11T22:25:31+00:00
|
{}
|
2023-10-11T22:44:41+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 10
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_1
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 10",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_1"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 10",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_1"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
6
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 10## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_1"
] |
901a9273a770e9d4138c5ddd91802f9c5c6cdc4b
|
<img src="proofpile_logo.jpg" width="500">
[ArXiv](http://arxiv.org/abs/2310.10631) | [Models](https://huggingface.co/EleutherAI/llemma_34b) | [Data](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | [Code](https://github.com/EleutherAI/math-lm) | [Blog](https://blog.eleuther.ai/llemma/) | [Sample Explorer](https://llemma-demo.github.io/)
[Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/)
The **Proof-Pile-2** is a 55 billion token dataset of mathematical and scientific documents. This dataset was created in order to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. It consists of three subsets:
- `arxiv` (29B tokens): the ArXiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- `open-web-math` (15B tokens): The [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) dataset, which contains much of the high-quality mathematical text from the internet.
- `algebraic-stack` (11B tokens): A new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.
You can download the dataset as follows:
```python
from datasets import load_dataset
ds = load_dataset("EleutherAI/proof-pile-2")
# To load only a specific subset, pass it as an argument, e.g.
ds_arxiv = load_dataset("EleutherAI/proof-pile-2", "arxiv")
```
### Schema
Each dataset row has the following structure:
```python
{
"text": ..., # document text
"meta": ..., # JSON string of metadata, schema specific to data source
}
```
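Because `meta` is stored as a JSON string with a source-specific schema, consumers decode it per row. A minimal sketch (the field name `subset` is illustrative only, not a documented key):

```python
import json

# Decode the per-row metadata. "meta" is a JSON string whose schema depends
# on the data source; "subset" below is an illustrative, assumed field name.
row = {"text": "Let G be a finite group...", "meta": '{"subset": "arxiv"}'}
meta = json.loads(row["meta"])
print(meta["subset"])
```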
### Dataset Contents
For detailed documentation of the ArXiv and web subsets, refer to [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math). The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.
| Language | AlgebraicStack tokens |
|-----------|-----------------------|
| Agda | 35.2 M |
| C | 25.1 M |
| C++ | 954.1 M |
| Coq | 281.9 M |
| Fortran | 724.9 M |
| GAP | 3.6 M |
| Haskell | 9.1 M |
| Idris | 10.9 M |
| Isabelle | 1,089.7 M |
| Julia | 531.0 M |
| Jupyter | 199.1 M |
| Lean | 285.6 M |
| Maple | 2.0 M |
| Matlab | 65.8 M |
| Python | 6,098.8 M |
| R | 71.3 M |
| Tex | 567.7 M |
| **Total** | **10,955.7 M** |
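The filtering described above can be sketched as follows. This is illustrative only: the actual AlgebraicStack filters are not published in this card, and the keyword list is an assumption about what a hand-crafted, language-specific heuristic might check.

```python
import re

# Illustrative heuristic (not the real AlgebraicStack filter): keep a
# Python-like source document only when it uses enough mathematical
# vocabulary. The keyword list and threshold are assumptions.
MATH_HINTS = re.compile(
    r"\b(numpy|scipy|sympy|theorem|lemma|matrix|eigenvalue|integral|proof)\b",
    re.IGNORECASE,
)

def looks_mathematical(source: str, min_hits: int = 2) -> bool:
    # Count distinct keyword occurrences and compare to the threshold.
    return len(MATH_HINTS.findall(source)) >= min_hits

print(looks_mathematical("import numpy as np\nA = np.eye(3)  # a 3x3 matrix"))
```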
### License
We do not alter the license of any of the underlying data.
### Version History
**v1.1.0**: Contains an updated version of OpenWebMath, precisely the one available at [open-web-math/open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math). This version of OpenWebMath has slightly improved filtering, for example, removal of very short documents.
**v1.0.0**: The data used to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b). Uses a development version of OpenWebMath.
### Citation
For the entire Proof-Pile-2, cite
```
@misc{azerbayev2023llemma,
title={Llemma: An Open Language Model For Mathematics},
author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck},
year={2023},
eprint={2310.10631},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
For the ArXiv subset, cite
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
For OpenWebMath, cite
```
@misc{paster2023openwebmath,
title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
year={2023},
eprint={2310.06786},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
EleutherAI/proof-pile-2
|
[
"task_categories:text-generation",
"size_categories:10B<n<100B",
"language:en",
"math",
"arxiv:2310.10631",
"arxiv:2310.06786",
"region:us"
] |
2023-10-11T23:11:33+00:00
|
{"language": ["en"], "size_categories": ["10B<n<100B"], "task_categories": ["text-generation"], "tags": ["math"]}
|
2023-10-25T05:16:04+00:00
|
[
"2310.10631",
"2310.06786"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-10B<n<100B #language-English #math #arxiv-2310.10631 #arxiv-2310.06786 #region-us
|

ArXiv | Models | Data | Code | Blog | Sample Explorer
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
The Proof-Pile-2 is a 55 billion token dataset of mathematical and scientific documents. This dataset was created in order to train the Llemma 7B and Llemma 34B models. It consists of three subsets:
* 'arxiv' (29B tokens): the ArXiv subset of RedPajama
* 'open-web-math' (15B tokens): The OpenWebMath dataset, which contains much of the high-quality mathematical text from the internet.
* 'algebraic-stack' (11B tokens): A new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.
You can download the dataset as follows:
### Schema
Each dataset row has the following structure
### Dataset Contents
For detailed documentation of the ArXiv and web subsets, refer to RedPajama and OpenWebMath. The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.
### License
We do not alter the license of any of the underlying data.
### Version History
v1.1.0: Contains an updated version of OpenWebMath, precisely the one available at open-web-math/open-web-math. This version of OpenWebMath has slightly improved filtering, for example, removal of very short documents.
v1.0.0: The data used to train the Llemma 7B and Llemma 34B. Uses a development version of OpenWebMath.
For the entire Proof-Pile-2, cite
For the ArXiv subset, cite
For OpenWebMath, cite
|
[
"### Schema\n\n\nEach dataset row has the following structure",
"### Dataset Contents\n\n\nFor detailed documentation of the ArXiv and web subsets, refer to RedPajama and OpenWebMath. The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.",
"### License\n\n\nWe do not alter the license of any of the underlying data.",
"### Version History\n\n\nv1.1.0: Contains an updated version of OpenWebMath, precisely the one available at open-web-math/open-web-math. This version of OpenWebMath has slightly improved filtering, for example, removal of very short documents.\n\n\nv1.0.0: The data used to train the Llemma 7B and Llemma 34B. Uses a development version of OpenWebMath.\n\n\nFor the entire Proof-Pile-2, cite\n\n\nFor the ArXiv subset, cite\n\n\nFor OpenWebMath, cite"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-10B<n<100B #language-English #math #arxiv-2310.10631 #arxiv-2310.06786 #region-us \n",
"### Schema\n\n\nEach dataset row has the following structure",
"### Dataset Contents\n\n\nFor detailed documentation of the ArXiv and web subsets, refer to RedPajama and OpenWebMath. The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.",
"### License\n\n\nWe do not alter the license of any of the underlying data.",
"### Version History\n\n\nv1.1.0: Contains an updated version of OpenWebMath, precisely the one available at open-web-math/open-web-math. This version of OpenWebMath has slightly improved filtering, for example, removal of very short documents.\n\n\nv1.0.0: The data used to train the Llemma 7B and Llemma 34B. Uses a development version of OpenWebMath.\n\n\nFor the entire Proof-Pile-2, cite\n\n\nFor the ArXiv subset, cite\n\n\nFor OpenWebMath, cite"
] |
[
53,
13,
87,
18,
117
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-10B<n<100B #language-English #math #arxiv-2310.10631 #arxiv-2310.06786 #region-us \n### Schema\n\n\nEach dataset row has the following structure### Dataset Contents\n\n\nFor detailed documentation of the ArXiv and web subsets, refer to RedPajama and OpenWebMath. The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.### License\n\n\nWe do not alter the license of any of the underlying data.### Version History\n\n\nv1.1.0: Contains an updated version of OpenWebMath, precisely the one available at open-web-math/open-web-math. This version of OpenWebMath has slightly improved filtering, for example, removal of very short documents.\n\n\nv1.0.0: The data used to train the Llemma 7B and Llemma 34B. Uses a development version of OpenWebMath.\n\n\nFor the entire Proof-Pile-2, cite\n\n\nFor the ArXiv subset, cite\n\n\nFor OpenWebMath, cite"
] |
ed50b405ad38bff6440fff92f4a45284538c70d2
|
# Dataset Card for Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B-v4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B-v4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Severian/ANIMA-Phi-Neptune-Mistral-7B-v4](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B-v4",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-28T20:28:28.700078](https://huggingface.co/datasets/open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B-v4/blob/main/results_2023-10-28T20-28-28.700078.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.10329278523489933,
"em_stderr": 0.003116735713102519,
"f1": 0.1624748322147643,
"f1_stderr": 0.003266242273162539,
"acc": 0.442081101118795,
"acc_stderr": 0.011112320094960076
},
"harness|drop|3": {
"em": 0.10329278523489933,
"em_stderr": 0.003116735713102519,
"f1": 0.1624748322147643,
"f1_stderr": 0.003266242273162539
},
"harness|gsm8k|5": {
"acc": 0.14935557240333586,
"acc_stderr": 0.009818090723727293
},
"harness|winogrande|5": {
"acc": 0.7348066298342542,
"acc_stderr": 0.01240654946619286
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B-v4
|
[
"region:us"
] |
2023-10-11T23:22:51+00:00
|
{"pretty_name": "Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B-v4", "dataset_summary": "Dataset automatically created during the evaluation run of model [Severian/ANIMA-Phi-Neptune-Mistral-7B-v4](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B-v4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T20:28:28.700078](https://huggingface.co/datasets/open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B-v4/blob/main/results_2023-10-28T20-28-28.700078.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.10329278523489933,\n \"em_stderr\": 0.003116735713102519,\n \"f1\": 0.1624748322147643,\n \"f1_stderr\": 0.003266242273162539,\n \"acc\": 0.442081101118795,\n \"acc_stderr\": 0.011112320094960076\n },\n \"harness|drop|3\": {\n \"em\": 0.10329278523489933,\n \"em_stderr\": 0.003116735713102519,\n \"f1\": 0.1624748322147643,\n \"f1_stderr\": 0.003266242273162539\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14935557240333586,\n \"acc_stderr\": 0.009818090723727293\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7348066298342542,\n \"acc_stderr\": 0.01240654946619286\n }\n}\n```", "repo_url": "https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B-v4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|arc:challenge|25_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T20_28_28.700078", "path": ["**/details_harness|drop|3_2023-10-28T20-28-28.700078.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T20-28-28.700078.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T20_28_28.700078", "path": ["**/details_harness|gsm8k|5_2023-10-28T20-28-28.700078.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T20-28-28.700078.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hellaswag|10_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T00-22-26.630693.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T00-22-26.630693.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T00-22-26.630693.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T00-22-26.630693.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T00-22-26.630693.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T00-22-26.630693.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T00-22-26.630693.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T00-22-26.630693.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T20_28_28.700078", "path": ["**/details_harness|winogrande|5_2023-10-28T20-28-28.700078.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T20-28-28.700078.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T00_22_26.630693", "path": ["results_2023-10-12T00-22-26.630693.parquet"]}, {"split": "2023_10_28T20_28_28.700078", "path": ["results_2023-10-28T20-28-28.700078.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T20-28-28.700078.parquet"]}]}]}
|
2023-10-28T19:28:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B-v4
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Severian/ANIMA-Phi-Neptune-Mistral-7B-v4 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-28T20:28:28.700078 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B-v4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Severian/ANIMA-Phi-Neptune-Mistral-7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T20:28:28.700078(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B-v4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Severian/ANIMA-Phi-Neptune-Mistral-7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T20:28:28.700078(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
29,
31,
177,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B-v4## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Severian/ANIMA-Phi-Neptune-Mistral-7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T20:28:28.700078(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
cc816d78c893346edfa5cdd23fa5ea9eac3bdd5c
|
### Dataset Sources
- **Repository:** [https://github.com/jind11/MedQA](https://github.com/jind11/MedQA)
- **Paper:** [https://arxiv.org/abs/2009.13081](https://arxiv.org/abs/2009.13081)
## Citation
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
}
|
cxllin/medinstruct
|
[
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"medical",
"arxiv:2009.13081",
"region:us"
] |
2023-10-11T23:44:06+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "tags": ["medical"]}
|
2023-10-13T15:36:08+00:00
|
[
"2009.13081"
] |
[
"en"
] |
TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #medical #arxiv-2009.13081 #region-us
|
### Dataset Sources
- Repository: URL
- Paper: URL
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
}
|
[
"### Dataset Sources\n\n- Repository: [URL\n- Paper : [URL\n\n@article{jin2020disease,\n title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},\n author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},\n journal={arXiv preprint arXiv:2009.13081},\n year={2020}\n}"
] |
[
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #medical #arxiv-2009.13081 #region-us \n",
"### Dataset Sources\n\n- Repository: [URL\n- Paper : [URL\n\n@article{jin2020disease,\n title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},\n author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},\n journal={arXiv preprint arXiv:2009.13081},\n year={2020}\n}"
] |
[
53,
121
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #medical #arxiv-2009.13081 #region-us \n### Dataset Sources\n\n- Repository: [URL\n- Paper : [URL\n\n@article{jin2020disease,\n title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},\n author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},\n journal={arXiv preprint arXiv:2009.13081},\n year={2020}\n}"
] |
a19863a400bced322d86ba21b30dfb7c8bdcdae3
|
# DataOptim
We launch DataOptim, an MLLM benchmark and competition where we aim to find the optimal training data for training Multimodal Large Language Models (MLLMs).
- Project page: http://dataoptim.org
- GitHub: https://github.com/BAAI-DCAI/DataOptim
## Datasets
Currently, the visual instruction tuning data used in the challenge are drawn from 18 public datasets.
More datasets are coming in the future!
|Category|Dataset|Images|Samples|Split|
|:-:|:-:|:-:|:-:|:-:|
|Image captioning|[COCO](https://cocodataset.org/#home)|82783|414113|train|
|Image captioning|[Flickr30K](https://shannon.cs.illinois.edu/DenotationGraph/)|29000|145000|Karpathy train split|
|Image captioning|[TextCaps](https://textvqa.org/textcaps/)|21953|109765|train|
|Visual question answering|[VQAv2](https://visualqa.org/)|82783|443757|train|
|Visual question answering|[OKVQA](https://okvqa.allenai.org/)|8998|9009|train|
|Visual question answering|[OCRVQA](https://ocr-vqa.github.io/)|166041|801673|train|
|Visual question answering|[GQA](https://cs.stanford.edu/people/dorarad/gqa/index.html)|72140|943000|train|
|Visual question answering|[TextVQA](https://textvqa.org/)|21953|34602|train|
|Visual question answering|[A-OKVQA](https://allenai.org/project/a-okvqa/home)|16540|17056|train|
|Visual question answering|[ScienceQA](https://scienceqa.github.io/)|6218|6218|train|
|Visual question answering|[Visual Genome QA (VGQA)](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html)|99280|1445322|-|
|Visual question answering|[DocVQA](https://www.docvqa.org/)|10194|39463|train|
|Visual question answering|[DVQA](https://github.com/kushalkafle/DVQA_dataset)|200000|2325316|train|
|Grounding|[RefCOCO/RefCOCO+/RefCOCOg](https://github.com/lichengunc/refer)|24407|287604|train|
|Grounding|[Shikra-RD](https://github.com/shikras/shikra)|883|5922|train|
|GPT-4 generated|[LLaVA-Instruct-150K](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)|81479|157712|-|
|GPT-4 generated|[SVIT](https://github.com/BAAI-DCAI/Visual-Instruction-Tuning)|108076|2992799|-|
|GPT-4V generated|[ShareGPT-4V](https://sharegpt4v.github.io/)|87296|102025|-|
|Mixed|[LLaVA-v1.5](https://github.com/haotian-liu/LLaVA/tree/main#visual-instruction-tuning)<sup>1</sup>|291684|665298|-|
|Total||974K<sup>2</sup>|11.2M|
<sup>1</sup> The bounding boxes in LLaVA-v1.5 are based on the padded image. You can find the discussion [here](https://github.com/haotian-liu/LLaVA/issues/606).
<sup>2</sup> The number of images is counted based on image IDs.
There may be duplicate images across different image sources, such as COCO 2014/2017, Visual Genome, etc.
We use different strategies to collect the prompts for different tasks.
- **Image captioning.** We carefully collect 5 manually written instructions and randomly sample one as the prompt for each caption. The fourth and fifth instructions are from [InstructBLIP](https://github.com/salesforce/LAVIS/blob/main/projects/instructblip/README.md).
- **Open-ended VQA.** As the answers in VQA datasets are generally short, we add an instruction after the question to ask the model to provide answers with a short sentence or phrase.
- **Multiple-choice VQA.** For A-OKVQA, we add an instruction before the question to ask the model to provide answers with correct options. For ScienceQA, we use the instructions and templates designed by [M3IT](https://m3-it.github.io/) and randomly sample one to format the prompt. Only data with image context are involved.
- **Grounding.** For RefCOCO/RefCOCO+/RefCOCOg, we use the data and templates in [Shikra](https://github.com/shikras/shikra) and randomly sample one to format the prompt.
- **GPT-4/GPT-4V generated & mixed datasets.** We keep the prompts unchanged.
|Category|Data|Prompts|
|:-:|:-:|:-:|
|Image captioning|COCO, Flickr30K, TextCaps|Describe the image as simply as possible with a sentence or phrase.<br />Give a brief summary of what you see.<br />Provide a short description of the image.<br />Write a short description for the image.<br />Briefly describe the content of the image.|
|Open-ended VQA|VQAv2, OKVQA, OCRVQA, GQA, TextVQA, VGQA, DocVQA, DVQA|*question* Answer the question directly with a short sentence or phrase.|
|Multiple-choice VQA|A-OKVQA|Choose the correct option for the following question: *question*|
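The per-task prompt rules above can be sketched in a few lines of Python. This is a simplified illustration, not the official DataOptim code; the instruction strings are taken from the table above, and the function name is ours:

```python
import random

# The five manually written captioning instructions (from the table above).
CAPTION_PROMPTS = [
    "Describe the image as simply as possible with a sentence or phrase.",
    "Give a brief summary of what you see.",
    "Provide a short description of the image.",
    "Write a short description for the image.",
    "Briefly describe the content of the image.",
]

def build_prompt(task, question=None):
    """Format a training prompt following the per-task strategies."""
    if task == "captioning":
        # Randomly sample one of the five instructions per caption.
        return random.choice(CAPTION_PROMPTS)
    if task == "open_vqa":
        # Append an instruction asking for a short answer.
        return f"{question} Answer the question directly with a short sentence or phrase."
    if task == "multi_choice_vqa":
        # Prepend an instruction asking for the correct option.
        return f"Choose the correct option for the following question: {question}"
    raise ValueError(f"unknown task: {task}")

print(build_prompt("open_vqa", "What color is the bus?"))
```

For GPT-4/GPT-4V generated and mixed datasets, no such formatting is applied and the prompts are kept unchanged.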
For now, the visual instruction tuning data are formatted in the training format of [LLaVA](https://github.com/haotian-liu/LLaVA) in the [data](https://huggingface.co/datasets/BAAI/DataOptim/tree/main/data) folder. The images can be found in the [images](https://huggingface.co/datasets/BAAI/DataOptim/tree/main/images) folder or on their official websites. The images should not be used for any other purpose and must comply with the original licenses. They may be taken down at any time when requested by the dataset owners.
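The LLaVA training format mentioned above can be consumed with a short sketch. The field names (`id`, `image`, `conversations` with `from`/`value` turns) follow the common LLaVA convention; the record below is an inlined hypothetical sample rather than actual DataOptim data:

```python
import json

# One hypothetical record in LLaVA training format (real data lives in the data/ folder).
raw = '''[
  {"id": "000001",
   "image": "coco/train2014/COCO_train2014_000000000001.jpg",
   "conversations": [
     {"from": "human", "value": "<image>\\nProvide a short description of the image."},
     {"from": "gpt", "value": "A man riding a bicycle down a city street."}
   ]}
]'''

records = json.loads(raw)
for rec in records:
    prompt = rec["conversations"][0]["value"]   # human turn, contains the <image> token
    answer = rec["conversations"][1]["value"]   # model target
    print(rec["image"], "|", prompt.splitlines()[-1], "->", answer)
```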
|
BAAI/DataOptim
|
[
"task_categories:visual-question-answering",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] |
2023-10-12T00:30:44+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["visual-question-answering"], "pretty_name": "DataOptim"}
|
2023-12-15T06:41:48+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-visual-question-answering #size_categories-1M<n<10M #language-English #region-us
|
DataOptim
=========
We launch DataOptim, an MLLM benchmark and competition where we aim to find the optimal training data for training Multimodal Large Language Models (MLLMs).
* Project page: URL
* GitHub: URL
Datasets
--------
Currently, the visual instruction tuning data used in the challenge are drawn from 18 public datasets.
More datasets are coming in the future!
1 The bounding boxes in LLaVA-v1.5 are based on the padded image. You can find the discussion here.
2 The number of images is counted based on image IDs.
There may be duplicate images across different image sources, such as COCO 2014/2017, Visual Genome, etc.
We use different strategies to collect the prompts for different tasks.
* Image captioning. We carefully collect 5 manually written instructions and randomly sample one as the prompt for each caption. The fourth and fifth instructions are from InstructBLIP.
* Open-ended VQA. As the answers in VQA datasets are generally short, we add an instruction after the question to ask the model to provide answers with a short sentence or phrase.
* Multiple-choice VQA. For A-OKVQA, we add an instruction before the question to ask the model to provide answers with correct options. For ScienceQA, we use the instructions and templates designed by M3IT and randomly sample one to format the prompt. Only data with image context are involved.
* Grounding. For RefCOCO/RefCOCO+/RefCOCOg, we use the data and templates in Shikra and randomly sample one to format the prompt.
* GPT-4/GPT-4V generated & mixed datasets. We keep the prompts unchanged.
For now, the visual instruction tuning data are formatted in the training format of LLaVA in the data folder. The images can be found in the images folder or on their official websites. The images should not be used for any other purpose and must comply with the original licenses. They may be taken down at any time when requested by the dataset owners.
|
[] |
[
"TAGS\n#task_categories-visual-question-answering #size_categories-1M<n<10M #language-English #region-us \n"
] |
[
37
] |
[
"passage: TAGS\n#task_categories-visual-question-answering #size_categories-1M<n<10M #language-English #region-us \n"
] |
bcf4671bc0af566e113713a5fdc1dc322a73b7f5
|
# Dataset Card for "dataset-generator-cmb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chris-buenrostro/dataset-generator-cmb
|
[
"region:us"
] |
2023-10-12T00:34:23+00:00
|
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19952, "num_examples": 10}], "download_size": 26112, "dataset_size": 19952}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T00:34:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dataset-generator-cmb"
More Information needed
|
[
"# Dataset Card for \"dataset-generator-cmb\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-generator-cmb\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset-generator-cmb\"\n\nMore Information needed"
] |
e34c6fe1c10be3f125967f8fa0e71bed9bb2cb46
|
# Dataset Card for "news_dataset_for_hdbscan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iara-project/news_dataset_for_hdbscan
|
[
"region:us"
] |
2023-10-12T00:34:44+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "category_natural_language", "dtype": "string"}, {"name": "link", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77001539, "num_examples": 21933}, {"name": "test", "num_bytes": 76974994, "num_examples": 21933}], "download_size": 96118980, "dataset_size": 153976533}}
|
2023-10-12T00:43:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "news_dataset_for_hdbscan"
More Information needed
|
[
"# Dataset Card for \"news_dataset_for_hdbscan\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"news_dataset_for_hdbscan\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"news_dataset_for_hdbscan\"\n\nMore Information needed"
] |
1c2562b5f74d3774805a022b8183d85db1f7fd88
|
# Dataset Card for "marketing-synthetic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rdiazconcha/marketing-synthetic
|
[
"region:us"
] |
2023-10-12T00:39:41+00:00
|
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20249, "num_examples": 10}], "download_size": 27613, "dataset_size": 20249}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T00:39:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "marketing-synthetic"
More Information needed
|
[
"# Dataset Card for \"marketing-synthetic\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"marketing-synthetic\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"marketing-synthetic\"\n\nMore Information needed"
] |
143842ba71e824a028572a89c9a522cbca40726e
|
# FakeRecogna
FakeRecogna is a dataset composed of real and fake news. The real news items are not directly linked to the fake ones, and vice versa, which could otherwise lead to a biased classification. The news collection was performed by crawlers developed to mine the pages of well-known news agencies of great national importance. The web crawlers were developed for each analyzed webpage, where the extracted information is first separated into categories and then grouped by date. The plurality of news across several pages and the different writing styles give the dataset great diversity for natural language processing analysis and machine learning algorithms.
## The Dataset
The news collection was performed by crawlers developed to mine the pages of well-known news agencies of great national importance. The fake news mining focused mainly on pages mentioned by the [Duke Reporters Lab](https://reporterslab.org/fact-checking/), which provides a list of pages that verify the veracity of news worldwide. There were 160 active fact-checking agencies in the world in 2019, and Brazil figures as a growing ecosystem with currently 9 initiatives; 6 of the 9 pages were considered during the search, with great variation in the number of fake news items extracted from each one, ending in 5,951 samples. Table 1 presents the current initiatives as well as the number of fake news items collected from each source.
| Fact-Check Agency | Web address | # News |
| ------------------ | ------------------------------------ | ------ |
| Boatos.org | https://boatos.org | 2,605 |
| Fato ou Fake | https://oglobo.globo.com/fato-ou-fake| 1,055 |
| E-farsas | https://www.e-farsas.com | 812 |
| UOL Confere | https://noticias.uol.com.br/confere | 582 |
| AFP Checamos | https://checamos.afp.com/afp-brasil | 509 |
| Projeto Comprova | https://checamos.afp.com/afp-brasil | 388 |
| Total | -------------------------------------| 5,951 |
Concerning the real news, the crawlers searched portals such as [G1](https://g1.globo.com/), [UOL](https://www.uol.com.br/) and [Extra](https://extra.globo.com/), which are publicly recognized as reliable news outlets, besides the [Ministry of Health of Brazil](https://www.gov.br/saude/pt-br) home page, resulting in a collection of over 100,000 samples. From this set, 5,951 samples were selected to keep the balance between classes, thus resulting in a dataset of 11,902 samples.
## More information
The FakeRecogna dataset is available on GitHub as a single XLSX file that contains 8 columns of metadata, where each row stands for a sample (real or fake news), as described in Table 2.
| Columns | Description |
| ------------------------ | ------------------------------------------ |
| Title | Title of article |
| Sub-title (if available) | Brief description of news |
| News | Information about the article |
| Category                 | News category according to its main subject |
| Author | Publication author |
| Date | Publication date |
| URL | Article web address |
| Class | 0 for fake news and 1 for real news |
The collected texts are distributed into six categories in relation to their main subjects: Brazil, Entertainment, Health, Politics, Science, and World. These categories are defined based on the journal sections where the news were extracted. The distribution of news by category and its percentages are described in Table 3.
| Category | # News | % |
| -------------- | ---------- | ------ |
| Brazil | 904 | 7.6 |
| Entertainment | 1,409 | 12.00 |
| Health | 4,456 | 37.4 |
| Politics       | 3,951      | 33.1   |
| Science | 602 | 5.1 |
| World | 580 | 4.9 |
| Total | 11,902 | 100.00 |
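In practice the single XLSX file would typically be read with pandas (`pd.read_excel`); to stay self-contained, the class-balance check below is sketched with plain dictionaries over hypothetical rows that follow the Table 2 schema (0 = fake news, 1 = real news):

```python
from collections import Counter

# Hypothetical samples following the Table 2 schema (only a few of the 8 columns shown).
rows = [
    {"Title": "...", "News": "...", "Category": "Health",   "Class": 0},
    {"Title": "...", "News": "...", "Category": "Politics", "Class": 1},
    {"Title": "...", "News": "...", "Category": "Health",   "Class": 1},
    {"Title": "...", "News": "...", "Category": "Science",  "Class": 0},
]

class_counts = Counter(r["Class"] for r in rows)          # balance between fake/real
category_counts = Counter(r["Category"] for r in rows)    # distribution as in Table 3

print("fake:", class_counts[0], "real:", class_counts[1])
print(category_counts.most_common())
```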
# Citation
@InProceedings{garcia2022fakerecogna,
author="Garcia, Gabriel L and Afonso, Luis CS and Papa, Jo{\~a}o P",
title="FakeRecogna: A New Brazilian Corpus for Fake News Detection",
booktitle="International Conference on Computational Processing of the Portuguese Language",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="57--67",
isbn="978-3-030-98305-5"}
|
recogna-nlp/FakeRecogna
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:pt",
"license:mit",
"FakeRecogna ",
"Fake News",
"Portuguese",
"Dataset",
"region:us"
] |
2023-10-12T01:26:50+00:00
|
{"language": ["pt"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "tags": ["FakeRecogna ", "Fake News", "Portuguese", "Dataset"]}
|
2023-12-07T19:36:39+00:00
|
[] |
[
"pt"
] |
TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-Portuguese #license-mit #FakeRecogna #Fake News #Portuguese #Dataset #region-us
|
FakeRecogna
===========
FakeRecogna is a dataset composed of real and fake news. The real news items are not directly linked to the fake ones, and vice versa, which could otherwise lead to a biased classification. The news collection was performed by crawlers developed to mine the pages of well-known news agencies of great national importance. The web crawlers were developed for each analyzed webpage, where the extracted information is first separated into categories and then grouped by date. The plurality of news across several pages and the different writing styles give the dataset great diversity for natural language processing analysis and machine learning algorithms.
The Dataset
-----------
The news collection was performed by crawlers developed to mine the pages of well-known news agencies of great national importance. The fake news mining focused mainly on pages mentioned by the Duke Reporters Lab, which provides a list of pages that verify the veracity of news worldwide. There were 160 active fact-checking agencies in the world in 2019, and Brazil figures as a growing ecosystem with currently 9 initiatives; 6 of the 9 pages were considered during the search, with great variation in the number of fake news items extracted from each one, ending in 5,951 samples. Table 1 presents the current initiatives as well as the number of fake news items collected from each source.
Fact-Check Agency: Boatos.org, Web address: URL, # News: 2,605
Fact-Check Agency: Fato ou Fake, Web address: URL, # News: 1,055
Fact-Check Agency: E-farsas, Web address: URL, # News: 812
Fact-Check Agency: UOL Confere, Web address: URL, # News: 582
Fact-Check Agency: AFP Checamos, Web address: URL, # News: 509
Fact-Check Agency: Projeto Comprova, Web address: URL, # News: 388
Fact-Check Agency: Total, Web address: -, # News: 5,951
Concerning the real news, the crawlers searched portals such as G1, UOL and Extra, which are publicly recognized as reliable news outlets, besides the Ministry of Health of Brazil home page, resulting in a collection of over 100,000 samples. From this set, 5,951 samples were selected to keep the balance between classes, thus resulting in a dataset of 11,902 samples.
More information
----------------
The FakeRecogna dataset is available on GitHub as a single XLSX file that contains 8 columns of metadata, where each row stands for a sample (real or fake news), as described in Table 2.
The collected texts are distributed into six categories in relation to their main subjects: Brazil, Entertainment, Health, Politics, Science, and World. These categories are defined based on the journal sections where the news were extracted. The distribution of news by category and its percentages are described in Table 3.
Category: Brazil, # News: 904, %: 7.6
Category: Entertainment, # News: 1,409, %: 12.00
Category: Health, # News: 4,456, %: 37.4
Category: Politics, # News: 3,951, %: 33.1
Category: Science, # News: 602, %: 5.1
Category: World, # News: 580, %: 4.9
Category: Total, # News: 11,902, %: 100.00
@InProceedings{garcia2022fakerecogna,
author="Garcia, Gabriel L and Afonso, Luis CS and Papa, Jo{~a}o P",
title="FakeRecogna: A New Brazilian Corpus for Fake News Detection",
booktitle="International Conference on Computational Processing of the Portuguese Language",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="57--67",
isbn="978-3-030-98305-5"}
|
[
"# News: 2,605\nFact-Check Agency: Fato ou Fake, Web address: URL 1,055, # News: \nFact-Check Agency: E-farsas, Web address: URL, # News: 812\nFact-Check Agency: UOL Confere, Web address: URL, # News: 582\nFact-Check Agency: AFP Checamos, Web address: URL, # News: 509\nFact-Check Agency: Projeto Comprova, Web address: URL, # News: 388\nFact-Check Agency: Total, Web address: -------------------------------------, # News: 5,951\n\n\nConcerning the real news, the crawlers searched portals such as G1, UOL and Extra, which are publicly recognized as reliable news outlets, besides the Ministry of Health of Brazil home page, resulting in a collection of over 100,000 samples. From this set, there were filtered out 5,951 samples to keep the balance between classes and, thus, resulting in a dataset comprised of 11,902 samples.\n\n\nMore informations\n-----------------\n\n\nThe FakeRecogna dataset is available at GitHub as a single XLSX file that contains 8 columns for the metadata, and each row stands for a sample (real or fake news), as described in Table 2.\n\n\n\nThe collected texts are distributed into six categories in relation to their main subjects: Brazil, Entertainment, Health, Politics, Science, and World. These categories are defined based on the journal sections where the news were extracted. 
The distribution of news by category and its percentages are described in Table 3.\n\n\nCategory: Brazil, # News: 904, %: 7.6\nCategory: Entertainment, # News: 1,409, %: 12.00\nCategory: Health, # News: 4,456, %: 37.4\nCategory: Politics, # News: 3.951, %: 33.1\nCategory: Science, # News: 602, %: 5.1\nCategory: World, # News: 580, %: 4.9\nCategory: Total, # News: 11,902, %: 100.00\n\n\n@aInProceedings{garcia2022fakerecogna,\nauthor=\"Garcia, Gabriel L and Afonso, Luis CS and Papa, Jo{~a}o P}\",\ntitle=\"Fakerecogna: A new brazilian corpus for fake news detection\",\nbooktitle=\"International Conference on Computational Processing of the Portuguese Language\",\nyear=\"2022\",\npublisher=\"Springer International Publishing\",\naddress=\"Cham\",\npages=\"57--67\",\nisbn=\"978-3-030-98305-5\"}"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-Portuguese #license-mit #FakeRecogna #Fake News #Portuguese #Dataset #region-us \n",
"# News: 2,605\nFact-Check Agency: Fato ou Fake, Web address: URL 1,055, # News: \nFact-Check Agency: E-farsas, Web address: URL, # News: 812\nFact-Check Agency: UOL Confere, Web address: URL, # News: 582\nFact-Check Agency: AFP Checamos, Web address: URL, # News: 509\nFact-Check Agency: Projeto Comprova, Web address: URL, # News: 388\nFact-Check Agency: Total, Web address: -------------------------------------, # News: 5,951\n\n\nConcerning the real news, the crawlers searched portals such as G1, UOL and Extra, which are publicly recognized as reliable news outlets, besides the Ministry of Health of Brazil home page, resulting in a collection of over 100,000 samples. From this set, there were filtered out 5,951 samples to keep the balance between classes and, thus, resulting in a dataset comprised of 11,902 samples.\n\n\nMore informations\n-----------------\n\n\nThe FakeRecogna dataset is available at GitHub as a single XLSX file that contains 8 columns for the metadata, and each row stands for a sample (real or fake news), as described in Table 2.\n\n\n\nThe collected texts are distributed into six categories in relation to their main subjects: Brazil, Entertainment, Health, Politics, Science, and World. These categories are defined based on the journal sections where the news were extracted. 
The distribution of news by category and its percentages are described in Table 3.\n\n\nCategory: Brazil, # News: 904, %: 7.6\nCategory: Entertainment, # News: 1,409, %: 12.00\nCategory: Health, # News: 4,456, %: 37.4\nCategory: Politics, # News: 3.951, %: 33.1\nCategory: Science, # News: 602, %: 5.1\nCategory: World, # News: 580, %: 4.9\nCategory: Total, # News: 11,902, %: 100.00\n\n\n@aInProceedings{garcia2022fakerecogna,\nauthor=\"Garcia, Gabriel L and Afonso, Luis CS and Papa, Jo{~a}o P}\",\ntitle=\"Fakerecogna: A new brazilian corpus for fake news detection\",\nbooktitle=\"International Conference on Computational Processing of the Portuguese Language\",\nyear=\"2022\",\npublisher=\"Springer International Publishing\",\naddress=\"Cham\",\npages=\"57--67\",\nisbn=\"978-3-030-98305-5\"}"
] |
[
57,
567
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-Portuguese #license-mit #FakeRecogna #Fake News #Portuguese #Dataset #region-us \n"
] |
9c5d42943160c5562c01a993ca6718cf928e6978
|
# Dataset Card for "radio-llama2-resp_tag"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rewcifer/radio-llama2-resp_tag
|
[
"region:us"
] |
2023-10-12T01:40:23+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5416871, "num_examples": 1000}], "download_size": 1250589, "dataset_size": 5416871}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T01:40:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "radio-llama2-resp_tag"
More Information needed
|
[
"# Dataset Card for \"radio-llama2-resp_tag\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"radio-llama2-resp_tag\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"radio-llama2-resp_tag\"\n\nMore Information needed"
] |
aa1846b51c03086cab5334fd607ecdf6dcd682fd
|
# Dataset for Shared Knowledge Lifelong Learning
Original Paper: [TMLR] Lightweight Learner for Shared Knowledge Lifelong Learning
### [Project Page](http://ilab.usc.edu/andy/skill) | [SKILL-102 dataset](http://ilab.usc.edu/andy/skill102) | [Paper](https://openreview.net/pdf?id=Jjl2c8kWUc) | [Github](https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learning)
> **Lightweight Learner for Shared Knowledge Lifelong Learning** <br>
> Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti <br>
> *Transactions on Machine Learning Research*
|
Harry-Li-27/SKILL
|
[
"region:us"
] |
2023-10-12T02:16:35+00:00
|
{}
|
2023-12-13T13:35:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset for Shared Knowledge Lifelong Learning
Original Paper: [TMLR] Lightweight Learner for Shared Knowledge Lifelong Learning
### Project Page | SKILL-102 dataset | Paper | Github
> Lightweight Learner for Shared Knowledge Lifelong Learning <br>
> Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti <br>
> *Transactions on Machine Learning Research*
|
[
"# Dataset for Shared Knowledge Lifelong Learning\nOriginal Paper: [TMLR] Lightweight Learner for Shared Knowledge Lifelong Learning",
"### Project Page | SKILL-102 dataset | Paper | Github\n\n> Lightweight Learner for Shared Knowledge Lifelong Learning <br>\n> Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, shixian wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti <br>\n> *Transactions on Machine Learning Research*"
] |
[
"TAGS\n#region-us \n",
"# Dataset for Shared Knowledge Lifelong Learning\nOriginal Paper: [TMLR] Lightweight Learner for Shared Knowledge Lifelong Learning",
"### Project Page | SKILL-102 dataset | Paper | Github\n\n> Lightweight Learner for Shared Knowledge Lifelong Learning <br>\n> Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, shixian wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti <br>\n> *Transactions on Machine Learning Research*"
] |
[
6,
28,
130
] |
[
"passage: TAGS\n#region-us \n# Dataset for Shared Knowledge Lifelong Learning\nOriginal Paper: [TMLR] Lightweight Learner for Shared Knowledge Lifelong Learning### Project Page | SKILL-102 dataset | Paper | Github\n\n> Lightweight Learner for Shared Knowledge Lifelong Learning <br>\n> Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, shixian wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti <br>\n> *Transactions on Machine Learning Research*"
] |
6e263c5a8285286d6e1a0b33d2bded1e465c7a52
|
# Dataset Card for Evaluation run of Delcos/NATE-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Delcos/NATE-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Delcos/NATE-7b](https://huggingface.co/Delcos/NATE-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Delcos__NATE-7b",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-12T03:21:56.889828](https://huggingface.co/datasets/open-llm-leaderboard/details_Delcos__NATE-7b/blob/main/results_2023-10-12T03-21-56.889828.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5894733182245001,
"acc_stderr": 0.03414332313258469,
"acc_norm": 0.5933830960354635,
"acc_norm_stderr": 0.03412320397463523,
"mc1": 0.3953488372093023,
"mc1_stderr": 0.017115815632418197,
"mc2": 0.571756969256499,
"mc2_stderr": 0.01564827771634302
},
"harness|arc:challenge|25": {
"acc": 0.5784982935153583,
"acc_stderr": 0.014430197069326025,
"acc_norm": 0.6092150170648464,
"acc_norm_stderr": 0.014258563880513778
},
"harness|hellaswag|10": {
"acc": 0.620991834295957,
"acc_stderr": 0.004841486716855774,
"acc_norm": 0.8209520015933081,
"acc_norm_stderr": 0.0038260895866500536
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5111111111111111,
"acc_stderr": 0.04318275491977976,
"acc_norm": 0.5111111111111111,
"acc_norm_stderr": 0.04318275491977976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5789473684210527,
"acc_stderr": 0.04017901275981749,
"acc_norm": 0.5789473684210527,
"acc_norm_stderr": 0.04017901275981749
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6415094339622641,
"acc_stderr": 0.02951470358398177,
"acc_norm": 0.6415094339622641,
"acc_norm_stderr": 0.02951470358398177
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6180555555555556,
"acc_stderr": 0.040629907841466674,
"acc_norm": 0.6180555555555556,
"acc_norm_stderr": 0.040629907841466674
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5606936416184971,
"acc_stderr": 0.03784271932887467,
"acc_norm": 0.5606936416184971,
"acc_norm_stderr": 0.03784271932887467
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.047240073523838876,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.047240073523838876
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215055,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215055
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5063829787234042,
"acc_stderr": 0.032683358999363366,
"acc_norm": 0.5063829787234042,
"acc_norm_stderr": 0.032683358999363366
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.32456140350877194,
"acc_stderr": 0.04404556157374767,
"acc_norm": 0.32456140350877194,
"acc_norm_stderr": 0.04404556157374767
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.36243386243386244,
"acc_stderr": 0.02475747390275206,
"acc_norm": 0.36243386243386244,
"acc_norm_stderr": 0.02475747390275206
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.043902592653775614,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.043902592653775614
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7032258064516129,
"acc_stderr": 0.025988500792411898,
"acc_norm": 0.7032258064516129,
"acc_norm_stderr": 0.025988500792411898
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.035107665979592154,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.035107665979592154
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.703030303030303,
"acc_stderr": 0.03567969772268049,
"acc_norm": 0.703030303030303,
"acc_norm_stderr": 0.03567969772268049
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.030088629490217483,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.030088629490217483
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.025416343096306433,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.025416343096306433
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6358974358974359,
"acc_stderr": 0.024396672985094767,
"acc_norm": 0.6358974358974359,
"acc_norm_stderr": 0.024396672985094767
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028597,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028597
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6008403361344538,
"acc_stderr": 0.03181110032413926,
"acc_norm": 0.6008403361344538,
"acc_norm_stderr": 0.03181110032413926
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7944954128440367,
"acc_stderr": 0.017324352325016022,
"acc_norm": 0.7944954128440367,
"acc_norm_stderr": 0.017324352325016022
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.03372343271653064,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.03372343271653064
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078962,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078962
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7040358744394619,
"acc_stderr": 0.030636591348699796,
"acc_norm": 0.7040358744394619,
"acc_norm_stderr": 0.030636591348699796
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6412213740458015,
"acc_stderr": 0.04206739313864908,
"acc_norm": 0.6412213740458015,
"acc_norm_stderr": 0.04206739313864908
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7024793388429752,
"acc_stderr": 0.04173349148083499,
"acc_norm": 0.7024793388429752,
"acc_norm_stderr": 0.04173349148083499
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6871165644171779,
"acc_stderr": 0.03642914578292406,
"acc_norm": 0.6871165644171779,
"acc_norm_stderr": 0.03642914578292406
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4107142857142857,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.4107142857142857,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7184466019417476,
"acc_stderr": 0.044532548363264673,
"acc_norm": 0.7184466019417476,
"acc_norm_stderr": 0.044532548363264673
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8290598290598291,
"acc_stderr": 0.024662496845209825,
"acc_norm": 0.8290598290598291,
"acc_norm_stderr": 0.024662496845209825
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7752234993614304,
"acc_stderr": 0.014927447101937153,
"acc_norm": 0.7752234993614304,
"acc_norm_stderr": 0.014927447101937153
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.025522474632121615,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.025522474632121615
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.43798882681564244,
"acc_stderr": 0.016593394227564846,
"acc_norm": 0.43798882681564244,
"acc_norm_stderr": 0.016593394227564846
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6535947712418301,
"acc_stderr": 0.02724561304721536,
"acc_norm": 0.6535947712418301,
"acc_norm_stderr": 0.02724561304721536
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6559485530546624,
"acc_stderr": 0.026981478043648043,
"acc_norm": 0.6559485530546624,
"acc_norm_stderr": 0.026981478043648043
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6697530864197531,
"acc_stderr": 0.026168298456732852,
"acc_norm": 0.6697530864197531,
"acc_norm_stderr": 0.026168298456732852
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4574468085106383,
"acc_stderr": 0.029719281272236837,
"acc_norm": 0.4574468085106383,
"acc_norm_stderr": 0.029719281272236837
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44589308996088656,
"acc_stderr": 0.012695244711379778,
"acc_norm": 0.44589308996088656,
"acc_norm_stderr": 0.012695244711379778
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5661764705882353,
"acc_stderr": 0.03010563657001663,
"acc_norm": 0.5661764705882353,
"acc_norm_stderr": 0.03010563657001663
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5915032679738562,
"acc_stderr": 0.019886221037501862,
"acc_norm": 0.5915032679738562,
"acc_norm_stderr": 0.019886221037501862
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.04494290866252091,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.04494290866252091
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6612244897959184,
"acc_stderr": 0.030299506562154185,
"acc_norm": 0.6612244897959184,
"acc_norm_stderr": 0.030299506562154185
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7810945273631841,
"acc_stderr": 0.029239174636647,
"acc_norm": 0.7810945273631841,
"acc_norm_stderr": 0.029239174636647
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.032180937956023566,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.032180937956023566
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3953488372093023,
"mc1_stderr": 0.017115815632418197,
"mc2": 0.571756969256499,
"mc2_stderr": 0.01564827771634302
}
}
```
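The aggregated "all" block above is an unweighted average over the per-task entries. A minimal sketch of that aggregation over a results payload shaped like the JSON above (the three task names and `acc` values are copied from the run shown; the helper name `mmlu_macro_avg` is illustrative, not part of the leaderboard tooling):

```python
# Subset of the per-task results shown above, keyed by harness task name.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.33},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5111111111111111},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.5789473684210527},
}

def mmlu_macro_avg(results: dict) -> float:
    """Unweighted mean accuracy over the hendrycksTest (MMLU) subtasks."""
    accs = [v["acc"] for k, v in results.items() if "hendrycksTest" in k]
    return sum(accs) / len(accs)

print(round(mmlu_macro_avg(results), 4))  # → 0.4734
```

The full "all" figures in the payload are computed over all 57 hendrycksTest subtasks plus the other harness tasks, so they differ from this three-task sample.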
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Delcos__NATE-7b
|
[
"region:us"
] |
2023-10-12T02:22:21+00:00
|
{"pretty_name": "Evaluation run of Delcos/NATE-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Delcos/NATE-7b](https://huggingface.co/Delcos/NATE-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Delcos__NATE-7b\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-12T03:21:56.889828](https://huggingface.co/datasets/open-llm-leaderboard/details_Delcos__NATE-7b/blob/main/results_2023-10-12T03-21-56.889828.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5894733182245001,\n \"acc_stderr\": 0.03414332313258469,\n \"acc_norm\": 0.5933830960354635,\n \"acc_norm_stderr\": 0.03412320397463523,\n \"mc1\": 0.3953488372093023,\n \"mc1_stderr\": 0.017115815632418197,\n \"mc2\": 0.571756969256499,\n \"mc2_stderr\": 0.01564827771634302\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5784982935153583,\n \"acc_stderr\": 0.014430197069326025,\n \"acc_norm\": 0.6092150170648464,\n \"acc_norm_stderr\": 0.014258563880513778\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.620991834295957,\n \"acc_stderr\": 0.004841486716855774,\n \"acc_norm\": 0.8209520015933081,\n \"acc_norm_stderr\": 0.0038260895866500536\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5111111111111111,\n \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.5111111111111111,\n \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5789473684210527,\n \"acc_stderr\": 0.04017901275981749,\n \"acc_norm\": 0.5789473684210527,\n \"acc_norm_stderr\": 0.04017901275981749\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6415094339622641,\n \"acc_stderr\": 0.02951470358398177,\n \"acc_norm\": 0.6415094339622641,\n \"acc_norm_stderr\": 0.02951470358398177\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6180555555555556,\n \"acc_stderr\": 0.040629907841466674,\n \"acc_norm\": 0.6180555555555556,\n \"acc_norm_stderr\": 0.040629907841466674\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.42,\n 
\"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5606936416184971,\n \"acc_stderr\": 0.03784271932887467,\n \"acc_norm\": 0.5606936416184971,\n \"acc_norm_stderr\": 0.03784271932887467\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.047240073523838876,\n \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.047240073523838876\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215055,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215055\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5063829787234042,\n \"acc_stderr\": 0.032683358999363366,\n \"acc_norm\": 0.5063829787234042,\n \"acc_norm_stderr\": 0.032683358999363366\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.32456140350877194,\n \"acc_stderr\": 0.04404556157374767,\n \"acc_norm\": 0.32456140350877194,\n \"acc_norm_stderr\": 0.04404556157374767\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.36243386243386244,\n \"acc_stderr\": 0.02475747390275206,\n \"acc_norm\": 0.36243386243386244,\n \"acc_norm_stderr\": 0.02475747390275206\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 
0.043902592653775614,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.043902592653775614\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7032258064516129,\n \"acc_stderr\": 0.025988500792411898,\n \"acc_norm\": 0.7032258064516129,\n \"acc_norm_stderr\": 0.025988500792411898\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.035107665979592154,\n \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.035107665979592154\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.703030303030303,\n \"acc_stderr\": 0.03567969772268049,\n \"acc_norm\": 0.703030303030303,\n \"acc_norm_stderr\": 0.03567969772268049\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217483,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217483\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.025416343096306433,\n \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306433\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6358974358974359,\n \"acc_stderr\": 0.024396672985094767,\n \"acc_norm\": 0.6358974358974359,\n \"acc_norm_stderr\": 0.024396672985094767\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028597,\n \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028597\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6008403361344538,\n \"acc_stderr\": 0.03181110032413926,\n \"acc_norm\": 0.6008403361344538,\n \"acc_norm_stderr\": 0.03181110032413926\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7944954128440367,\n \"acc_stderr\": 0.017324352325016022,\n \"acc_norm\": 0.7944954128440367,\n \"acc_norm_stderr\": 0.017324352325016022\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.42592592592592593,\n \"acc_stderr\": 0.03372343271653064,\n \"acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.03372343271653064\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078962,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078962\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159263,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159263\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7040358744394619,\n \"acc_stderr\": 0.030636591348699796,\n \"acc_norm\": 0.7040358744394619,\n \"acc_norm_stderr\": 0.030636591348699796\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7024793388429752,\n \"acc_stderr\": 0.04173349148083499,\n \"acc_norm\": 0.7024793388429752,\n \"acc_norm_stderr\": 0.04173349148083499\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n 
\"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6871165644171779,\n \"acc_stderr\": 0.03642914578292406,\n \"acc_norm\": 0.6871165644171779,\n \"acc_norm_stderr\": 0.03642914578292406\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.4107142857142857,\n \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.044532548363264673,\n \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.044532548363264673\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8290598290598291,\n \"acc_stderr\": 0.024662496845209825,\n \"acc_norm\": 0.8290598290598291,\n \"acc_norm_stderr\": 0.024662496845209825\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7752234993614304,\n \"acc_stderr\": 0.014927447101937153,\n \"acc_norm\": 0.7752234993614304,\n \"acc_norm_stderr\": 0.014927447101937153\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.025522474632121615,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.025522474632121615\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43798882681564244,\n \"acc_stderr\": 0.016593394227564846,\n \"acc_norm\": 0.43798882681564244,\n \"acc_norm_stderr\": 0.016593394227564846\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6535947712418301,\n \"acc_stderr\": 0.02724561304721536,\n \"acc_norm\": 0.6535947712418301,\n \"acc_norm_stderr\": 0.02724561304721536\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n \"acc_stderr\": 0.026981478043648043,\n 
\"acc_norm\": 0.6559485530546624,\n \"acc_norm_stderr\": 0.026981478043648043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6697530864197531,\n \"acc_stderr\": 0.026168298456732852,\n \"acc_norm\": 0.6697530864197531,\n \"acc_norm_stderr\": 0.026168298456732852\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236837,\n \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236837\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44589308996088656,\n \"acc_stderr\": 0.012695244711379778,\n \"acc_norm\": 0.44589308996088656,\n \"acc_norm_stderr\": 0.012695244711379778\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5661764705882353,\n \"acc_stderr\": 0.03010563657001663,\n \"acc_norm\": 0.5661764705882353,\n \"acc_norm_stderr\": 0.03010563657001663\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5915032679738562,\n \"acc_stderr\": 0.019886221037501862,\n \"acc_norm\": 0.5915032679738562,\n \"acc_norm_stderr\": 0.019886221037501862\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.04494290866252091,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.04494290866252091\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6612244897959184,\n \"acc_stderr\": 0.030299506562154185,\n \"acc_norm\": 0.6612244897959184,\n \"acc_norm_stderr\": 0.030299506562154185\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 
0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3953488372093023,\n \"mc1_stderr\": 0.017115815632418197,\n \"mc2\": 0.571756969256499,\n \"mc2_stderr\": 0.01564827771634302\n }\n}\n```", "repo_url": "https://huggingface.co/Delcos/NATE-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|arc:challenge|25_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hellaswag|10_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T03-21-56.889828.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T03-21-56.889828.parquet", 
"**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T03-21-56.889828.parquet", 
"**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T03-21-56.889828.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T03-21-56.889828.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T03-21-56.889828.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T03-21-56.889828.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", 
"data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-marketing|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": 
["**/details_harness|truthfulqa:mc|0_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T03-21-56.889828.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T03_21_56.889828", "path": ["results_2023-10-12T03-21-56.889828.parquet"]}, {"split": "latest", "path": ["results_2023-10-12T03-21-56.889828.parquet"]}]}]}
|
2023-10-12T02:23:23+00:00
|
TAGS
#region-us
|
# Dataset Card for Evaluation run of Delcos/NATE-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Delcos/NATE-7b on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
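The loading snippet itself was stripped from this card, so here is a minimal Python sketch. The repository id is an assumption based on the leaderboard's usual `details_<org>__<model>` naming, and the config/split layout follows the description above:

```python
# Sketch of loading the per-sample details for one evaluation task.
# REPO_ID follows the leaderboard's usual "details_<org>__<model>" naming
# and is an assumption, as is the "latest" split alias described above.
REPO_ID = "open-llm-leaderboard/details_Delcos__NATE-7b"

def config_for(task: str, n_shot: int) -> str:
    """Build a config name such as 'harness_truthfulqa_mc_0'."""
    return f"harness_{task}_{n_shot}"

def load_details(task: str, n_shot: int):
    """Fetch one task's details from the Hub (requires `datasets` and network)."""
    from datasets import load_dataset  # imported lazily to keep the sketch light
    return load_dataset(REPO_ID, config_for(task, n_shot), split="latest")
```

Passing `split="latest"` selects the most recent run; a timestamped split name (e.g. `2023_10_12T03_21_56.889828`) selects a specific run.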
## Latest results
These are the latest results from run 2023-10-12T03:21:56.889828 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
] |
13b4e83e2304600ff65509804b1ed5b8932ca54c
|
# iruca-llama2-1k: Lazy Llama 2 Formatting
This is a subset (1000 samples) of the iruca.ai example dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2).
|
xinqiyang/iruca-llama2-1k
|
[
"task_categories:question-answering",
"size_categories:n<1K",
"license:apache-2.0",
"llama2",
"finetune",
"japanese",
"region:us"
] |
2023-10-12T02:30:31+00:00
|
{"license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "tags": ["llama2", "finetune", "japanese"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 966693, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T03:43:20+00:00
|
[] |
[] |
TAGS
#task_categories-question-answering #size_categories-n<1K #license-apache-2.0 #llama2 #finetune #japanese #region-us
|
# iruca-llama2-1k: Lazy Llama 2 Formatting
This is a subset (1000 samples) of the URL example dataset, processed to match Llama 2's prompt format as described in this article.
|
[
"# iruca-llama2-1k: Lazy Llama 2 Formatting\n\nThis is a subset (1000 samples) of the URL example dataset, processed to match Llama 2's prompt format as described in this article."
] |
[
"TAGS\n#task_categories-question-answering #size_categories-n<1K #license-apache-2.0 #llama2 #finetune #japanese #region-us \n",
"# iruca-llama2-1k: Lazy Llama 2 Formatting\n\nThis is a subset (1000 samples) of the URL example dataset, processed to match Llama 2's prompt format as described in this article."
] |
[
47,
50
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #license-apache-2.0 #llama2 #finetune #japanese #region-us \n# iruca-llama2-1k: Lazy Llama 2 Formatting\n\nThis is a subset (1000 samples) of the URL example dataset, processed to match Llama 2's prompt format as described in this article."
] |
2d90da2e43d2ca269b847d0808fb90613e8af1e4
|
# Dataset Card for "imagenet-1k-rand_blur"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
acozma/imagenet-1k-rand_blur
|
[
"region:us"
] |
2023-10-12T03:00:47+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "params", "struct": [{"name": "func", "dtype": "string"}, {"name": "radius", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 283029903517.0, "num_examples": 500000}], "download_size": 283032983222, "dataset_size": 283029903517.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-31T07:59:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "imagenet-1k-rand_blur"
More Information needed
|
[
"# Dataset Card for \"imagenet-1k-rand_blur\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"imagenet-1k-rand_blur\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"imagenet-1k-rand_blur\"\n\nMore Information needed"
] |
c2ef7a78969e0ed930ab4466b331d62dc1edf155
|
# Dataset Card for "test_multifquad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
manu/multifquad_test
|
[
"region:us"
] |
2023-10-12T03:37:15+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answers_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "is_impossible", "dtype": "bool"}], "splits": [{"name": "test", "num_bytes": 478925, "num_examples": 400}, {"name": "valid", "num_bytes": 123865, "num_examples": 100}], "download_size": 373072, "dataset_size": 602790}}
|
2023-10-12T03:37:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_multifquad"
More Information needed
|
[
"# Dataset Card for \"test_multifquad\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_multifquad\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_multifquad\"\n\nMore Information needed"
] |
d2835870e9a75229534c9d9b52ae684777f52fb6
|
# Dataset Card for "ft-sample-mistral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
open-phi/ft-sample-mistral
|
[
"region:us"
] |
2023-10-12T04:00:48+00:00
|
{"dataset_info": {"features": [{"name": "topic", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "concepts", "sequence": "string"}, {"name": "outline", "sequence": "string"}, {"name": "markdown", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2856370189, "num_examples": 23650}], "download_size": 937886508, "dataset_size": 2856370189}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-18T03:39:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ft-sample-mistral"
More Information needed
|
[
"# Dataset Card for \"ft-sample-mistral\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ft-sample-mistral\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ft-sample-mistral\"\n\nMore Information needed"
] |
8135609239b03a0ac917ff7454efd49eee9a2318
|
# Dataset Card for "french-30b_separate"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
manu/french-30b_separate
|
[
"region:us"
] |
2023-10-12T04:04:44+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "WmtEnFrTest", "path": "data/WmtEnFrTest-*"}, {"split": "EnglishFrenchWebpagesScrapedTranslatedTest", "path": "data/EnglishFrenchWebpagesScrapedTranslatedTest-*"}, {"split": "FrenchLibrispeechTextOnlyTest", "path": "data/FrenchLibrispeechTextOnlyTest-*"}, {"split": "FrenchPodcastsTest", "path": "data/FrenchPodcastsTest-*"}, {"split": "FrenchOpenSubtitlesTest", "path": "data/FrenchOpenSubtitlesTest-*"}, {"split": "OriginalSongsLyricsWithFrenchTranslationTest", "path": "data/OriginalSongsLyricsWithFrenchTranslationTest-*"}, {"split": "ProjectgutenbergFrTest", "path": "data/ProjectgutenbergFrTest-*"}, {"split": "BnfGallicaTest", "path": "data/BnfGallicaTest-*"}, {"split": "ThesesFr20132023Test", "path": "data/ThesesFr20132023Test-*"}, {"split": "LegiOpendataTest", "path": "data/LegiOpendataTest-*"}, {"split": "BaloOpendataTest", "path": "data/BaloOpendataTest-*"}, {"split": "JadeOpendataTest", "path": "data/JadeOpendataTest-*"}, {"split": "DoleOpendataTest", "path": "data/DoleOpendataTest-*"}, {"split": "SardeOpendataTest", "path": "data/SardeOpendataTest-*"}, {"split": "QrOpendataTest", "path": "data/QrOpendataTest-*"}, {"split": "JorfOpendataTest", "path": "data/JorfOpendataTest-*"}, {"split": "IncaOpendataTest", "path": "data/IncaOpendataTest-*"}, {"split": "AccoOpendataTest", "path": "data/AccoOpendataTest-*"}, {"split": "KaliOpendataTest", "path": "data/KaliOpendataTest-*"}, {"split": "DebatsOpendataTest", "path": "data/DebatsOpendataTest-*"}, {"split": "CnilOpendataTest", "path": "data/CnilOpendataTest-*"}, {"split": "CappOpendataTest", "path": "data/CappOpendataTest-*"}, {"split": "CassOpendataTest", "path": "data/CassOpendataTest-*"}, {"split": "ConstitOpendataTest", "path": "data/ConstitOpendataTest-*"}, {"split": "IlluinLayoutDatasetTextOnlyTest", "path": "data/IlluinLayoutDatasetTextOnlyTest-*"}, {"split": "WikisourceFrTest", "path": "data/WikisourceFrTest-*"}, {"split": 
"Wikipedia20220301.frTest", "path": "data/Wikipedia20220301.frTest-*"}, {"split": "Oscar2301FrTest", "path": "data/Oscar2301FrTest-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "dataset_id", "dtype": "string"}], "splits": [{"name": "WmtEnFrTest", "num_bytes": 933080, "num_examples": 3003}, {"name": "EnglishFrenchWebpagesScrapedTranslatedTest", "num_bytes": 3557903, "num_examples": 8580}, {"name": "FrenchLibrispeechTextOnlyTest", "num_bytes": 698968, "num_examples": 2582}, {"name": "FrenchPodcastsTest", "num_bytes": 505018, "num_examples": 100}, {"name": "FrenchOpenSubtitlesTest", "num_bytes": 3048714, "num_examples": 100}, {"name": "OriginalSongsLyricsWithFrenchTranslationTest", "num_bytes": 2156145, "num_examples": 756}, {"name": "ProjectgutenbergFrTest", "num_bytes": 39019119, "num_examples": 100}, {"name": "BnfGallicaTest", "num_bytes": 43160730, "num_examples": 100}, {"name": "ThesesFr20132023Test", "num_bytes": 3957037, "num_examples": 959}, {"name": "LegiOpendataTest", "num_bytes": 16589963, "num_examples": 10000}, {"name": "BaloOpendataTest", "num_bytes": 11094568, "num_examples": 1355}, {"name": "JadeOpendataTest", "num_bytes": 56977150, "num_examples": 5586}, {"name": "DoleOpendataTest", "num_bytes": 2065780, "num_examples": 100}, {"name": "SardeOpendataTest", "num_bytes": 1044391, "num_examples": 2244}, {"name": "QrOpendataTest", "num_bytes": 18924359, "num_examples": 100}, {"name": "JorfOpendataTest", "num_bytes": 11892298, "num_examples": 10000}, {"name": "IncaOpendataTest", "num_bytes": 27827026, "num_examples": 3737}, {"name": "AccoOpendataTest", "num_bytes": 36928857, "num_examples": 2541}, {"name": "KaliOpendataTest", "num_bytes": 7740933, "num_examples": 4306}, {"name": "DebatsOpendataTest", "num_bytes": 38200789, "num_examples": 100}, {"name": "CnilOpendataTest", "num_bytes": 1495015, "num_examples": 181}, {"name": "CappOpendataTest", "num_bytes": 9680857, "num_examples": 
727}, {"name": "CassOpendataTest", "num_bytes": 8283986, "num_examples": 1422}, {"name": "ConstitOpendataTest", "num_bytes": 1340350, "num_examples": 100}, {"name": "IlluinLayoutDatasetTextOnlyTest", "num_bytes": 11714355, "num_examples": 4885}, {"name": "WikisourceFrTest", "num_bytes": 44358940, "num_examples": 10000}, {"name": "Wikipedia20220301.frTest", "num_bytes": 28814742, "num_examples": 10000}, {"name": "Oscar2301FrTest", "num_bytes": 51030875, "num_examples": 9834}], "download_size": 0, "dataset_size": 483041948}}
|
2023-10-16T04:22:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "french-30b_separate"
More Information needed
|
[
"# Dataset Card for \"french-30b_separate\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"french-30b_separate\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"french-30b_separate\"\n\nMore Information needed"
] |
cd44d7429d00dcdf082c7083b8f0987c34df46d5
|
# iruca-1k: Lazy Llama 2 Formatting
This is a subset (1000 samples) of the excellent [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the following [colab notebook](https://colab.research.google.com/drive/1Ad7a9zMmkxuXTOh1Z7-rNSICA4dybpM2?usp=sharing).
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 (chat) model in a Google Colab.
### Format from xlsx file to CSV
```bash
pip install openpyxl pandas
python generate.py
pip install huggingface_hub
huggingface-cli repo create iruca_llama2_japanese_demo --type dataset
git clone https://huggingface.co/datasets/xinqiyang/iruca_llama2_japanese_demo
```
|
xinqiyang/iruca_llama2_japanese_demo
|
[
"region:us"
] |
2023-10-12T04:05:11+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24485.34975369458, "num_examples": 15}], "download_size": 3242, "dataset_size": 24485.34975369458}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T05:47:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# iruca-1k: Lazy Llama 2 Formatting
This is a subset (1000 samples) of the excellent 'timdettmers/openassistant-guanaco' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab.
### Format from xlsx file to CSV
|
[
"# iruca-1k: Lazy Llama 2 Formatting\n\nThis is a subset (1000 samples) of the excellent 'timdettmers/openassistant-guanaco' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.\n\nUseful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab.",
"### Format from xlsx file to CSV"
] |
[
"TAGS\n#region-us \n",
"# iruca-1k: Lazy Llama 2 Formatting\n\nThis is a subset (1000 samples) of the excellent 'timdettmers/openassistant-guanaco' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.\n\nUseful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab.",
"### Format from xlsx file to CSV"
] |
[
6,
119,
11
] |
[
"passage: TAGS\n#region-us \n# iruca-1k: Lazy Llama 2 Formatting\n\nThis is a subset (1000 samples) of the excellent 'timdettmers/openassistant-guanaco' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.\n\nUseful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab.### Format from xlsx file to CSV"
] |
2d7b550aa43c3487d42baaa522033d2c3f1b71a7
|
# Dataset Card for "chart2text_pew"
original dataset: https://github.com/vis-nlp/Chart-to-text
|
heegyu/chart2text_pew
|
[
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"region:us"
] |
2023-10-12T04:06:05+00:00
|
{"language": ["en"], "license": "gpl-3.0", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "old_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "imgPath", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "dataPath", "dtype": "string"}, {"name": "chartType", "dtype": "string"}, {"name": "complexity", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "bboxesPath", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "data", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 280009472, "num_examples": 6500}, {"name": "val", "num_bytes": 62717503.096, "num_examples": 1392}, {"name": "test", "num_bytes": 61265523.23, "num_examples": 1393}], "download_size": 400276057, "dataset_size": 403992498.32600003}}
|
2023-10-12T04:08:48+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-1K<n<10K #language-English #license-gpl-3.0 #region-us
|
# Dataset Card for "chart2text_pew"
original dataset: URL
|
[
"# Dataset Card for \"chart2text_pew\"\n\noriginal dataset: URL"
] |
[
"TAGS\n#size_categories-1K<n<10K #language-English #license-gpl-3.0 #region-us \n",
"# Dataset Card for \"chart2text_pew\"\n\noriginal dataset: URL"
] |
[
30,
18
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #language-English #license-gpl-3.0 #region-us \n# Dataset Card for \"chart2text_pew\"\n\noriginal dataset: URL"
] |
012b39854068c418433fbc3fca4f7df7a5886f2f
|
This is based on ultrachat dataset https://huggingface.co/datasets/stingning/ultrachat
I filtered it using the classic "unfiltered" keywords list https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered to remove instances of refusals and bias
About 90% of the dataset was removed.
What remains (400k conversations) is unlikely to incline the model to refuse.
I am investigating a less heavy-handed approach using dolphin-2.1 to reword any detected refusals.
|
cognitivecomputations/ultrachat-uncensored
|
[
"license:mit",
"region:us"
] |
2023-10-12T04:25:04+00:00
|
{"license": "mit"}
|
2023-10-23T04:29:16+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
This is based on ultrachat dataset URL
I filtered it using the classic "unfiltered" keywords list URL to remove instances of refusals and bias
About 90% of the dataset was removed.
What remains (400k conversations) is unlikely to incline the model to refuse.
I am investigating a less heavy-handed approach using dolphin-2.1 to reword any detected refusals.
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
c7d299fb564d4bb9f61604a43b6f138e441d7335
|
# Dataset Card for "code_20b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
manu/code_20b
|
[
"region:us"
] |
2023-10-12T04:37:03+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "dataset_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66209111592, "num_examples": 11692337}, {"name": "test", "num_bytes": 276152957, "num_examples": 48689}], "download_size": 0, "dataset_size": 66485264549}}
|
2023-10-16T04:13:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "code_20b"
More Information needed
|
[
"# Dataset Card for \"code_20b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"code_20b\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"code_20b\"\n\nMore Information needed"
] |
9c066bd7ed63c6e7ff157601e5c91b507d90ea0d
|
The dataset is generated by the interaction between the user simulator `Socratic` and `GPT-3.5-turbo`, including `50,728` samples.
For more details, please see the following link: https://github.com/FreedomIntelligence/PlatoLM
|
FreedomIntelligence/SocraticChat
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-12T04:53:51+00:00
|
{"license": "apache-2.0"}
|
2023-10-12T05:10:36+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
The dataset is generated by the interaction between the user simulator 'Socratic' and 'GPT-3.5-turbo', including '50,728' samples.
For more details, please see the following link: URL
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
431467840cedb4bc37361173280ba1e0af41c7dd
|
# Dataset Card for "code_20b_separate"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
manu/code_20b_separate
|
[
"region:us"
] |
2023-10-12T05:07:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "StarcoderdataPythonTest", "path": "data/StarcoderdataPythonTest-*"}, {"split": "StarcoderdataMarkdownTest", "path": "data/StarcoderdataMarkdownTest-*"}, {"split": "StarcoderdataJupyterScriptsDedupFilteredTest", "path": "data/StarcoderdataJupyterScriptsDedupFilteredTest-*"}, {"split": "StarcoderdataJupyterStructuredCleanDedupTest", "path": "data/StarcoderdataJupyterStructuredCleanDedupTest-*"}, {"split": "StarcoderdataJsonTest", "path": "data/StarcoderdataJsonTest-*"}, {"split": "CodeContestsTest", "path": "data/CodeContestsTest-*"}, {"split": "PypiCleanTest", "path": "data/PypiCleanTest-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "dataset_id", "dtype": "string"}], "splits": [{"name": "StarcoderdataPythonTest", "num_bytes": 45900630, "num_examples": 10000}, {"name": "StarcoderdataMarkdownTest", "num_bytes": 40927519, "num_examples": 10000}, {"name": "StarcoderdataJupyterScriptsDedupFilteredTest", "num_bytes": 15297731, "num_examples": 1829}, {"name": "StarcoderdataJupyterStructuredCleanDedupTest", "num_bytes": 12631734, "num_examples": 1337}, {"name": "StarcoderdataJsonTest", "num_bytes": 8853154, "num_examples": 7127}, {"name": "CodeContestsTest", "num_bytes": 28120884, "num_examples": 8396}, {"name": "PypiCleanTest", "num_bytes": 124421305, "num_examples": 10000}], "download_size": 0, "dataset_size": 276152957}}
|
2023-10-16T04:13:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "code_20b_separate"
More Information needed
|
[
"# Dataset Card for \"code_20b_separate\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"code_20b_separate\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"code_20b_separate\"\n\nMore Information needed"
] |
8251ecf44e772b45bcc1bbd7bcedb7416284c45e
|
# Dataset Card for "chart2text_statista"
original dataset: https://github.com/vis-nlp/Chart-to-text
|
heegyu/chart2text_statista
|
[
"region:us"
] |
2023-10-12T05:42:42+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "dataPath", "dtype": "string"}, {"name": "imgPath", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "first_caption", "dtype": "string"}, {"name": "chartType", "dtype": "string"}, {"name": "release date", "dtype": "string"}, {"name": "Region", "dtype": "string"}, {"name": "survey time period", "dtype": "string"}, {"name": "xAxis", "dtype": "string"}, {"name": "yAxis", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "data", "dtype": "string"}, {"name": "columns", "dtype": "string"}, {"name": "length", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1034457048.216, "num_examples": 24368}, {"name": "val", "num_bytes": 223876316.638, "num_examples": 5221}, {"name": "test", "num_bytes": 224020677.682, "num_examples": 5222}], "download_size": 763065167, "dataset_size": 1482354042.536}}
|
2023-10-12T06:02:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chart2text_statista"
original dataset: URL
|
[
"# Dataset Card for \"chart2text_statista\"\n\noriginal dataset: URL"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chart2text_statista\"\n\noriginal dataset: URL"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chart2text_statista\"\n\noriginal dataset: URL"
] |
a1ed713a0a12a74c1d2ff2117571ae3c744f6826
|
# Dataset Card for "hf-stack-peft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
smangrul/hf-stack-peft
|
[
"region:us"
] |
2023-10-12T05:43:27+00:00
|
{"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1280407, "num_examples": 158}], "download_size": 424682, "dataset_size": 1280407}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T05:43:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hf-stack-peft"
More Information needed
|
[
"# Dataset Card for \"hf-stack-peft\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hf-stack-peft\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hf-stack-peft\"\n\nMore Information needed"
] |
92cd57fb237d2644879a4b5d96ad4283c8330a52
|
# Dataset Card for Kendal
<!-- Provide a quick summary of the dataset. -->
This is a dataset for Kendal Bot.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
|
Om007/kendal_bot
|
[
"task_categories:question-answering",
"language:en",
"region:us"
] |
2023-10-12T05:47:35+00:00
|
{"language": ["en"], "task_categories": ["question-answering"]}
|
2023-10-12T05:59:42+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #language-English #region-us
|
# Dataset Card for Kendal
This is a dataset for Kendal Bot.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
|
[
"# Dataset Card for Kendal\n\n\n\nThis is a dataset for Kendal Bot.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
"TAGS\n#task_categories-question-answering #language-English #region-us \n",
"# Dataset Card for Kendal\n\n\n\nThis is a dataset for Kendal Bot.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
22,
18,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] |
[
"passage: TAGS\n#task_categories-question-answering #language-English #region-us \n# Dataset Card for Kendal\n\n\n\nThis is a dataset of for Kendal Bot.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
a56ec9a0295bdfa552249108b4aa2d8147cb3d20
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## prompt 00 (basic prompts)
|
ostapeno/platy_icl5_subset1.0_maxD1000_3
|
[
"region:us"
] |
2023-10-12T06:21:54+00:00
|
{}
|
2023-10-12T18:50:03+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## prompt 00 (basic prompts)
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 1000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## prompt 00 (basic prompts)"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## subset: 1.0",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 1000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## prompt 00 (basic prompts)"
] |
[
6,
9,
10,
5,
9,
14,
12,
12,
27,
7,
9
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## subset: 1.0## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 1000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## prompt 00 (basic prompts)"
] |
8ee23eb120f1bbde70086463fa5e62e804f13017
|
# Dataset Card for Dataset Name
This dataset is truncated
|
arthurdubrou/Bird_explained_corrections
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-12T06:37:14+00:00
|
{"license": "apache-2.0"}
|
2023-10-13T14:26:32+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
# Dataset Card for Dataset Name
This dataset is truncated
|
[
"# Dataset Card for Dataset Name\n\nThis dataset is truncated"
] |
[
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset Card for Dataset Name\n\nThis dataset is truncated"
] |
[
14,
15
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n# Dataset Card for Dataset Name\n\nThis dataset is truncated"
] |
f2a15374dae8c90395f461c5ba0546c034751665
|
The Exclusively Dark (ExDARK) dataset is a collection of low-light
images from very low-light environments to twilight (i.e., 10 different
conditions) with 12 object classes (similar to PASCAL VOC) annotated on both
image class level and local object bounding boxes.
The object classes are as follows:
- Dog
- Motorbike
- People
- Cat
- Chair
- Table
- Car
- Bicycle
- Bottle
- Bus
- Cup
- Boat
For more information about the original Exclusively Dark Image dataset,
please visit the official dataset page:
[https://github.com/cs-chan/Exclusively-Dark-Image-Dataset](https://github.com/cs-chan/Exclusively-Dark-Image-Dataset)
Please refer to the original dataset source for any additional details,
citations, or specific usage guidelines provided by the dataset creators.
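The class-to-index mapping implied by the annotations can be captured directly in code. A minimal sketch follows (the index order is taken from the class list above, matching this repository's label metadata; the helper name `label_names` is illustrative):

```python
# ExDark object classes, indexed in the order listed above
# (0 = Dog ... 11 = Boat, matching this repository's label metadata).
EXDARK_CLASSES = [
    "Dog", "Motorbike", "People", "Cat", "Chair", "Table",
    "Car", "Bicycle", "Bottle", "Bus", "Cup", "Boat",
]

def label_names(indices):
    """Map one image's integer class labels to ExDark class names."""
    return [EXDARK_CLASSES[i] for i in indices]

print(label_names([2, 6]))  # ['People', 'Car']
```

Each image's `labels` sequence can be decoded this way alongside its `bboxes` (one box per label), assuming the features are loaded as described in this repository's metadata.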
|
SatwikKambham/ex-dark
|
[
"license:bsd-3-clause",
"region:us"
] |
2023-10-12T06:54:37+00:00
|
{"license": "bsd-3-clause", "dataset_info": {"config_name": "exdark", "features": [{"name": "img", "dtype": "image"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "Dog", "1": "Motorbike", "2": "People", "3": "Cat", "4": "Chair", "5": "Table", "6": "Car", "7": "Bicycle", "8": "Bottle", "9": "Bus", "10": "Cup", "11": "Boat"}}}}, {"name": "bboxes", "sequence": {"sequence": "float32", "length": 4}}], "splits": [{"name": "train", "num_bytes": 1770065, "num_examples": 7361}], "download_size": 1487935964, "dataset_size": 1770065}}
|
2023-10-13T09:58:40+00:00
|
[] |
[] |
TAGS
#license-bsd-3-clause #region-us
|
The Exclusively Dark (ExDARK) dataset is a collection of low-light
images from very low-light environments to twilight (i.e., 10 different
conditions) with 12 object classes (similar to PASCAL VOC) annotated on both
image class level and local object bounding boxes.
The object classes are as follows:
- Dog
- Motorbike
- People
- Cat
- Chair
- Table
- Car
- Bicycle
- Bottle
- Bus
- Cup
- Boat
For more information about the original Exclusively Dark Image dataset,
please visit the official dataset page:
URL
Please refer to the original dataset source for any additional details,
citations, or specific usage guidelines provided by the dataset creators.
|
[] |
[
"TAGS\n#license-bsd-3-clause #region-us \n"
] |
[
16
] |
[
"passage: TAGS\n#license-bsd-3-clause #region-us \n"
] |
2dbc6fe262c01ec1ecc8cbd99435ade85cf5bfbe
|
# Dataset Card for "uit_data_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pphuc25/uit_data_train
|
[
"region:us"
] |
2023-10-12T07:06:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "evidence", "dtype": "string"}, {"name": "domain", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109989912, "num_examples": 26967}], "download_size": 21040532, "dataset_size": 109989912}}
|
2023-10-15T07:49:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "uit_data_train"
More Information needed
|
[
"# Dataset Card for \"uit_data_train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"uit_data_train\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"uit_data_train\"\n\nMore Information needed"
] |
677cfef0992089a1e9a6d3bb65d0706694679de5
|
# Dataset Card for Evaluation run of teknium/CollectiveCognition-v1.1-Mistral-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [teknium/CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_teknium__CollectiveCognition-v1.1-Mistral-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T17:47:55.890655](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__CollectiveCognition-v1.1-Mistral-7B/blob/main/results_2023-12-03T17-47-55.890655.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.35860500379075055,
"acc_stderr": 0.01321031736413403
},
"harness|gsm8k|5": {
"acc": 0.35860500379075055,
"acc_stderr": 0.01321031736413403
}
}
```
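Individual metrics can be read straight out of such a results payload. A small illustrative sketch using only the standard library (the values are copied from the run shown above):

```python
import json

# Aggregated results payload, as displayed above for the latest run.
payload = '''
{
    "all": {
        "acc": 0.35860500379075055,
        "acc_stderr": 0.01321031736413403
    },
    "harness|gsm8k|5": {
        "acc": 0.35860500379075055,
        "acc_stderr": 0.01321031736413403
    }
}
'''

results = json.loads(payload)
gsm8k_acc = results["harness|gsm8k|5"]["acc"]
print(f"GSM8K 5-shot accuracy: {gsm8k_acc:.4f}")  # GSM8K 5-shot accuracy: 0.3586
```

The same keys (`"all"`, one entry per `harness|...|k` task) appear in the per-run JSON files stored in this repository.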
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_teknium__CollectiveCognition-v1.1-Mistral-7B
|
[
"region:us"
] |
2023-10-12T07:33:46+00:00
|
{"pretty_name": "Evaluation run of teknium/CollectiveCognition-v1.1-Mistral-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [teknium/CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_teknium__CollectiveCognition-v1.1-Mistral-7B\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T17:47:55.890655](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__CollectiveCognition-v1.1-Mistral-7B/blob/main/results_2023-12-03T17-47-55.890655.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.35860500379075055,\n \"acc_stderr\": 0.01321031736413403\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.35860500379075055,\n \"acc_stderr\": 0.01321031736413403\n }\n}\n```", "repo_url": "https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|arc:challenge|25_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|arc:challenge|25_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T18_24_08.168024", "path": ["**/details_harness|drop|3_2023-10-24T18-24-08.168024.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T18-24-08.168024.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T18_24_08.168024", "path": ["**/details_harness|gsm8k|5_2023-10-24T18-24-08.168024.parquet"]}, {"split": "2023_12_03T17_43_05.326590", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-43-05.326590.parquet"]}, {"split": "2023_12_03T17_47_55.890655", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-47-55.890655.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T17-47-55.890655.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hellaswag|10_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hellaswag|10_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hellaswag|10_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-33-23.557832.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-33-23.557832.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T08-33-23.557832.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-08T13-48-47.550072.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T13-48-47.550072.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-08T13-48-47.550072.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T13-48-47.550072.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T13-48-47.550072.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T13-48-47.550072.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-08T13-48-47.550072.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": 
[{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T13-48-47.550072.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": 
"2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", 
"path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T13-48-47.550072.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-08T13-48-47.550072.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T18_24_08.168024", "path": ["**/details_harness|winogrande|5_2023-10-24T18-24-08.168024.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T18-24-08.168024.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T08_33_23.557832", "path": ["results_2023-10-12T08-33-23.557832.parquet"]}, {"split": "2023_10_24T18_24_08.168024", "path": ["results_2023-10-24T18-24-08.168024.parquet"]}, {"split": "2023_11_08T13_48_47.550072", "path": ["results_2023-11-08T13-48-47.550072.parquet"]}, {"split": "2023_12_03T17_43_05.326590", "path": ["results_2023-12-03T17-43-05.326590.parquet"]}, {"split": "2023_12_03T17_47_55.890655", "path": ["results_2023-12-03T17-47-55.890655.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T17-47-55.890655.parquet"]}]}]}
|
2023-12-03T17:48:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of teknium/CollectiveCognition-v1.1-Mistral-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model teknium/CollectiveCognition-v1.1-Mistral-7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
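A minimal sketch of loading one task's details. The repository name follows the leaderboard's usual `details_<org>__<model>` naming convention and, like the config name, is an assumption rather than a verified identifier; the split-name mapping is inferred from the timestamps visible in this card's metadata.

```python
# from datasets import load_dataset  # requires the `datasets` package

# Assumed repository name, following the "details_<org>__<model>" convention.
REPO = "open-llm-leaderboard/details_teknium__CollectiveCognition-v1.1-Mistral-7B"


def timestamp_to_split(ts: str) -> str:
    """Map a run timestamp such as "2023-11-08T13:48:47.550072" to the
    split name used in this dataset ("2023_11_08T13_48_47.550072")."""
    return ts.replace("-", "_").replace(":", "_")


# Load one task's details; "latest" always aliases the newest run:
# data = load_dataset(REPO, "harness_winogrande_5", split="latest")
# Or pin a specific run by its timestamp:
# data = load_dataset(REPO, "harness_winogrande_5",
#                     split=timestamp_to_split("2023-10-24T18:24:08.168024"))
```

Each evaluated task is its own configuration, so swap `"harness_winogrande_5"` for any config name listed in this card's metadata.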
## Latest results
These are the latest results from run 2023-12-03T17:47:55.890655 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of teknium/CollectiveCognition-v1.1-Mistral-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/CollectiveCognition-v1.1-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:47:55.890655 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of teknium/CollectiveCognition-v1.1-Mistral-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/CollectiveCognition-v1.1-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T17:47:55.890655(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
175,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of teknium/CollectiveCognition-v1.1-Mistral-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/CollectiveCognition-v1.1-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T17:47:55.890655(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ed36e89d80dd0c9b73bd091417560db9b5ca8fff
|
# Dataset Card for Evaluation run of teknium/CollectiveCognition-v1-Mistral-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/teknium/CollectiveCognition-v1-Mistral-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [teknium/CollectiveCognition-v1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, one corresponding to each of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
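The split-naming convention can be sketched as follows; the mapping rule is inferred from the timestamped split names in this card's config listing, not from official tooling:

```python
# Map a run timestamp (as it appears in result filenames, e.g.
# "2023-10-29T01:40:21.634950") to the split name used by this
# dataset (e.g. "2023_10_29T01_40_21.634950"). The rule -- replace
# date/time separators with underscores -- is inferred from the
# config listing in this card.
def split_name_for_run(timestamp: str) -> str:
    return timestamp.replace("-", "_").replace(":", "_")

print(split_name_for_run("2023-10-29T01:40:21.634950"))
# -> 2023_10_29T01_40_21.634950
```

This is only a convenience sketch for picking a specific run's split instead of "latest".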
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_teknium__CollectiveCognition-v1-Mistral-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-29T01:40:21.634950](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__CollectiveCognition-v1-Mistral-7B/blob/main/results_2023-10-29T01-40-21.634950.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.014786073825503355,
"em_stderr": 0.0012360366760473097,
"f1": 0.07218645134228192,
"f1_stderr": 0.0017555798787673934,
"acc": 0.47738594388492395,
"acc_stderr": 0.011139031066837696
},
"harness|drop|3": {
"em": 0.014786073825503355,
"em_stderr": 0.0012360366760473097,
"f1": 0.07218645134228192,
"f1_stderr": 0.0017555798787673934
},
"harness|gsm8k|5": {
"acc": 0.17892342683851403,
"acc_stderr": 0.010557661392901294
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774099
}
}
```
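As a minimal, self-contained illustration of how these nested metrics can be consumed (using the literal values shown above rather than a live download; stderr fields omitted for brevity), the per-task accuracies can be flattened and averaged:

```python
# Flatten the nested harness results shown above into a simple
# {task: accuracy} mapping. The dict literal mirrors the JSON block
# in this card, so no network access is needed.
results = {
    "all": {
        "em": 0.014786073825503355,
        "f1": 0.07218645134228192,
        "acc": 0.47738594388492395,
    },
    "harness|gsm8k|5": {"acc": 0.17892342683851403},
    "harness|winogrande|5": {"acc": 0.7758484609313339},
}

# Collect per-task accuracies, skipping the aggregate "all" entry.
per_task_acc = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task != "all" and "acc" in metrics
}

mean_acc = sum(per_task_acc.values()) / len(per_task_acc)
print(round(mean_acc, 4))  # matches the aggregate "acc" above (0.4774)
```

The same pattern applies to rows loaded from the "results" configuration mentioned earlier.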
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_teknium__CollectiveCognition-v1-Mistral-7B
|
[
"region:us"
] |
2023-10-12T07:39:42+00:00
|
{"pretty_name": "Evaluation run of teknium/CollectiveCognition-v1-Mistral-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [teknium/CollectiveCognition-v1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_teknium__CollectiveCognition-v1-Mistral-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-29T01:40:21.634950](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__CollectiveCognition-v1-Mistral-7B/blob/main/results_2023-10-29T01-40-21.634950.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.014786073825503355,\n \"em_stderr\": 0.0012360366760473097,\n \"f1\": 0.07218645134228192,\n \"f1_stderr\": 0.0017555798787673934,\n \"acc\": 0.47738594388492395,\n \"acc_stderr\": 0.011139031066837696\n },\n \"harness|drop|3\": {\n \"em\": 0.014786073825503355,\n \"em_stderr\": 0.0012360366760473097,\n \"f1\": 0.07218645134228192,\n \"f1_stderr\": 0.0017555798787673934\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17892342683851403,\n \"acc_stderr\": 0.010557661392901294\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774099\n }\n}\n```", "repo_url": "https://huggingface.co/teknium/CollectiveCognition-v1-Mistral-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|arc:challenge|25_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_29T01_40_21.634950", "path": ["**/details_harness|drop|3_2023-10-29T01-40-21.634950.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-29T01-40-21.634950.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_29T01_40_21.634950", "path": ["**/details_harness|gsm8k|5_2023-10-29T01-40-21.634950.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-29T01-40-21.634950.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hellaswag|10_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-39-18.628472.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-39-18.628472.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-39-18.628472.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-39-18.628472.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-39-18.628472.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T08-39-18.628472.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T08-39-18.628472.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T08-39-18.628472.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_29T01_40_21.634950", "path": ["**/details_harness|winogrande|5_2023-10-29T01-40-21.634950.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-29T01-40-21.634950.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T08_39_18.628472", "path": ["results_2023-10-12T08-39-18.628472.parquet"]}, {"split": "2023_10_29T01_40_21.634950", "path": ["results_2023-10-29T01-40-21.634950.parquet"]}, {"split": "latest", "path": ["results_2023-10-29T01-40-21.634950.parquet"]}]}]}
|
2023-10-29T00:40:34+00:00
|
[] |
[] |
|
# Dataset Card for Evaluation run of teknium/CollectiveCognition-v1-Mistral-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model teknium/CollectiveCognition-v1-Mistral-7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-29T01:40:21.634950 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
4205fc2e2e478c6e4735f18838c9fd920a8fedff
|
# Dataset Card for Evaluation run of teknium/Mistral-Trismegistus-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/teknium/Mistral-Trismegistus-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [teknium/Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T09:46:08.723071](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B/blob/main/results_2023-10-25T09-46-08.723071.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.010591442953020135,
"em_stderr": 0.0010483469790502314,
"f1": 0.07238674496644287,
"f1_stderr": 0.001675223530701393,
"acc": 0.4004875617305928,
"acc_stderr": 0.010548628211357203
},
"harness|drop|3": {
"em": 0.010591442953020135,
"em_stderr": 0.0010483469790502314,
"f1": 0.07238674496644287,
"f1_stderr": 0.001675223530701393
},
"harness|gsm8k|5": {
"acc": 0.09931766489764973,
"acc_stderr": 0.008238371412683985
},
"harness|winogrande|5": {
"acc": 0.7016574585635359,
"acc_stderr": 0.012858885010030421
}
}
```
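As a quick cross-check of the numbers above, the top-level `"all"` accuracy is simply the unweighted mean of the per-task accuracies (the `drop` task reports `em`/`f1` rather than `acc`, so it does not enter the average). A minimal sketch using the values reported in this run:

```python
# Per-task accuracies copied from the results block above; "drop" reports
# em/f1 instead of acc, so it is excluded from the accuracy average.
task_acc = {
    "harness|gsm8k|5": 0.09931766489764973,
    "harness|winogrande|5": 0.7016574585635359,
}

# The aggregated "all" accuracy is the unweighted mean of the task accuracies.
mean_acc = sum(task_acc.values()) / len(task_acc)
print(mean_acc)  # ≈ 0.4004875617305928, matching the "all" acc above
```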
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B
|
[
"region:us"
] |
2023-10-12T07:45:48+00:00
|
{"pretty_name": "Evaluation run of teknium/Mistral-Trismegistus-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [teknium/Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T09:46:08.723071](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B/blob/main/results_2023-10-25T09-46-08.723071.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.010591442953020135,\n \"em_stderr\": 0.0010483469790502314,\n \"f1\": 0.07238674496644287,\n \"f1_stderr\": 0.001675223530701393,\n \"acc\": 0.4004875617305928,\n \"acc_stderr\": 0.010548628211357203\n },\n \"harness|drop|3\": {\n \"em\": 0.010591442953020135,\n \"em_stderr\": 0.0010483469790502314,\n \"f1\": 0.07238674496644287,\n \"f1_stderr\": 0.001675223530701393\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09931766489764973,\n \"acc_stderr\": 0.008238371412683985\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7016574585635359,\n \"acc_stderr\": 0.012858885010030421\n }\n}\n```", "repo_url": "https://huggingface.co/teknium/Mistral-Trismegistus-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|arc:challenge|25_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T09_46_08.723071", "path": ["**/details_harness|drop|3_2023-10-25T09-46-08.723071.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T09-46-08.723071.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T09_46_08.723071", "path": ["**/details_harness|gsm8k|5_2023-10-25T09-46-08.723071.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T09-46-08.723071.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hellaswag|10_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-45-24.509522.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-45-24.509522.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-45-24.509522.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-45-24.509522.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-45-24.509522.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T08-45-24.509522.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T08-45-24.509522.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T08-45-24.509522.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T09_46_08.723071", "path": ["**/details_harness|winogrande|5_2023-10-25T09-46-08.723071.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T09-46-08.723071.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T08_45_24.509522", "path": ["results_2023-10-12T08-45-24.509522.parquet"]}, {"split": "2023_10_25T09_46_08.723071", "path": ["results_2023-10-25T09-46-08.723071.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T09-46-08.723071.parquet"]}]}]}
|
2023-10-25T08:46:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of teknium/Mistral-Trismegistus-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model teknium/Mistral-Trismegistus-7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
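The original loading snippet did not survive extraction; below is a minimal sketch. The repository id and the config-name scheme are assumptions inferred from the Open LLM Leaderboard naming convention and the config names listed in this card's metadata:

```python
# Sketch: load the per-task details of this evaluation run.
# REPO_ID is an assumption based on the Open LLM Leaderboard convention;
# it is not spelled out in this card.
REPO_ID = "open-llm-leaderboard/details_teknium__Mistral-Trismegistus-7B"

def config_name(task: str, n_shot: int) -> str:
    """Build a config name as listed in this card, e.g. 'harness_winogrande_5'."""
    return "harness_" + task.replace("-", "_").replace(":", "_") + f"_{n_shot}"

def load_details(task: str, n_shot: int, split: str = "latest"):
    # Deferred import so the sketch can be read without the dependency installed.
    from datasets import load_dataset
    return load_dataset(REPO_ID, config_name(task, n_shot), split=split)

# Example (requires network access):
# data = load_details("winogrande", 5)
```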
## Latest results
These are the latest results from run 2023-10-25T09:46:08.723071 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of teknium/Mistral-Trismegistus-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/Mistral-Trismegistus-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T09:46:08.723071(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of teknium/Mistral-Trismegistus-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/Mistral-Trismegistus-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T09:46:08.723071(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of teknium/Mistral-Trismegistus-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/Mistral-Trismegistus-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T09:46:08.723071(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
517dd6dee7ab7fd9ec0a3f0d100711ddbe9757cf
|
## Benchmark
**German Benchmarks on Hugging Face**
At present, there is a notable scarcity, if not a complete **absence, of reliable and true German benchmarks** designed to evaluate the capabilities of German Large Language Models (LLMs). While some efforts have been made to translate English benchmarks into German, these attempts often **fall short in terms of precision, accuracy, and context sensitivity, even when employing GPT-4 technology**. Take, for instance, the **MT-Bench**, a widely recognized and frequently used benchmark for assessing LLM performance in real-world scenarios. The seemingly straightforward and cost-effective approach of **translating MT-Bench into German using GPT-4 proves to be counterproductive**, resulting in subpar outcomes that hinder a realistic and contextually appropriate evaluation of German LLMs. To illustrate this, we offer a few examples extracted from translated MT-Bench versions available on Hugging Face.
**Example: Uncommon use of words**
*{ "category": "writing", "turns": [ "Schreibe eine überzeugende E-Mail, um deinen introvertierten Freund, der öffentliches Sprechen nicht mag, dazu zu bringen, sich als Gastredner bei einer lokalen Veranstaltung zu engagieren. Verwende überzeugende Argumente und gehe auf mögliche Einwände ein. Bitte sei prägnant.", "Kannst du deine vorherige Antwort umformulieren und in jedem Satz eine Metapher oder ein **Gleichnis** einbauen?" ] }*
What you can see here is an example of a German word that someone would not use in a real conversation (marked in bold). In a real conversation, someone would rather use “Vergleich” instead of “Gleichnis”.
**Example: Wrong context**
*{ "category": "roleplay", "turns": [ "Bitte nehmen Sie die Rolle eines englischen Übersetzers an, der damit beauftragt ist, Rechtschreibung und Sprache zu korrigieren und zu verbessern. Unabhängig von der Sprache, die ich verwende, sollten Sie sie identifizieren, übersetzen und mit einer verfeinerten und polierten Version meines Textes **auf Englisch antworten**.*
Here the model is asked to translate a given sentence into English and to phrase a more sophisticated version of the original sentence. As we aim to assess a German LLM, requesting the model to translate a sentence into English would be pointless.
**Example: Wrong content**
*{"category": "writing", "turns": [ "Bearbeite den folgenden Absatz, um etwaige grammatikalische Fehler zu korrigieren: ***Sie erinnerte sich nicht daran, wo ihre Geldbörse ist, also denke ich, dass sie im Auto ist, aber er sagt, dass sie auf dem Küchentisch ist, aber er ist sich nicht sicher, und dann haben sie mich gebeten, danach zu suchen, sie sagt: "Kannst du?", und ich antworte: "Vielleicht, aber ich bin nicht sicher", und er hat mich nicht gehört, und er fragt: "Was?", "Hast du es gefunden?"***.", "Ändere deine frühere Antwort und vermeide die Verwendung von geschlechtsspezifischen Pronomen." ]}*
The task here is to edit a sentence full of grammatical errors and correct them. The problem with this translated version of MT-Bench is that the sentence was already corrected by GPT-4 during translation. So now the model is asked to correct a sentence that no longer contains any grammatical errors.
**Example: Pointless translation of anglicisms**
*{ "category": "roleplay", "turns": [ "Jetzt bist du ein **Maschinenlern-Ingenieur**. Deine Aufgabe besteht darin, komplexe Maschinenlernkonzepte auf einfache Weise zu erklären, damit Kunden ohne technischen Hintergrund deine Produkte verstehen und ihnen vertrauen können. Fangen wir an mit der Frage: Was ist ein Sprachmodell? Wird es mit gelabelten oder ungelabelten Daten trainiert?, "Ist das wahr? Ich habe gehört, dass andere Unternehmen unterschiedliche Ansätze verwenden, um dies zu tun und es sicherer zu machen.]}*
As we can see here, the GPT4 translation of this dataset led to a term that no one would use when speaking German. Instead, one would rather use the original English term “Machine Learning Engineer” or the properly translated term “Ingenieur für maschinelles Lernen”.
**Our approach to a German Benchmark**
So, instead of simply translating the MT-Bench with GPT4, we applied a mixed approach of automatic translation and human evaluation. In a first step, we translated the complete MT-Bench into German using GPT4. In a second step, we conducted a thorough manual evaluation of each translated dataset to ensure the following quality criteria:
- The dataset has been translated into German language.
- The German translation consists of an appropriate and genuine wording.
- The context of the translated dataset is meaningful and reasonable for assessing German language skills of the model.

- The content of the translated dataset is still reasonable after translation.
Although this method is undeniably time-consuming, it enables us to create a substantive benchmark for evaluating the model's proficiency in completing various benchmark categories. Nonetheless, it is important to acknowledge that even with this meticulous approach, a truly flawless benchmark remains elusive, as minor oversights may still occur due to human errors.
Nevertheless, when we compare the current approaches of German Language Model teams available on Hugging Face, we may assume that our German MT-Bench, as of today, stands as the most precise and practical benchmark for assessing German LLMs. Consequently, the benchmark scores we present offer a realistic evaluation of the model's performance in German.
|
VAGOsolutions/MT-Bench-TrueGerman
|
[
"language:de",
"region:us"
] |
2023-10-12T08:00:45+00:00
|
{"language": ["de"]}
|
2023-10-12T09:07:55+00:00
|
[] |
[
"de"
] |
TAGS
#language-German #region-us
|
## Benchmark
German Benchmarks on Hugging Face
At present, there is a notable scarcity, if not a complete absence, of reliable and true German benchmarks designed to evaluate the capabilities of German Language Models (LLMs). While some efforts have been made to translate English benchmarks into German, these attempts often fall short in terms of precision, accuracy, and context sensitivity, even when employing GPT-4 technology. Take, for instance, the MT-Bench, a widely recognized and frequently used benchmark for assessing LLM performance in real-world scenarios. The seemingly straightforward and cost-effective approach of translating MT-Bench into German using GPT-4 proves to be counterproductive, resulting in subpar outcomes that hinder a realistic and contextually appropriate evaluation of German LLMs. To illustrate this, we offer a few examples extracted from translated MT-Bench versions available on Hugging Face.
Example: Uncommon use of words
*{ "category": "writing", "turns": [ "Schreibe eine überzeugende E-Mail, um deinen introvertierten Freund, der öffentliches Sprechen nicht mag, dazu zu bringen, sich als Gastredner bei einer lokalen Veranstaltung zu engagieren. Verwende überzeugende Argumente und gehe auf mögliche Einwände ein. Bitte sei prägnant.", "Kannst du deine vorherige Antwort umformulieren und in jedem Satz eine Metapher oder ein Gleichnis einbauen?" ] }*
What you can see here is an example of a German word that someone would not use in a real conversation (marked in bold). In a real conversation, someone would rather use “Vergleich” instead of “Gleichnis”.
Example: Wrong context
*{ "category": "roleplay", "turns": [ "Bitte nehmen Sie die Rolle eines englischen Übersetzers an, der damit beauftragt ist, Rechtschreibung und Sprache zu korrigieren und zu verbessern. Unabhängig von der Sprache, die ich verwende, sollten Sie sie identifizieren, übersetzen und mit einer verfeinerten und polierten Version meines Textes auf Englisch antworten.*
Here the model is asked to translate a given sentence into English and phrase a more sophisticated version of the original sentence. As we aim to assess a German LLM, requesting the model to translate a sentence into English would be pointless.
Example: Wrong content
*{"category": "writing", "turns": [ "Bearbeite den folgenden Absatz, um etwaige grammatikalische Fehler zu korrigieren: *Sie erinnerte sich nicht daran, wo ihre Geldbörse ist, also denke ich, dass sie im Auto ist, aber er sagt, dass sie auf dem Küchentisch ist, aber er ist sich nicht sicher, und dann haben sie mich gebeten, danach zu suchen, sie sagt: "Kannst du?", und ich antworte: "Vielleicht, aber ich bin nicht sicher", und er hat mich nicht gehört, und er fragt: "Was?", "Hast du es gefunden?"*.", "Ändere deine frühere Antwort und vermeide die Verwendung von geschlechtsspezifischen Pronomen." ]}*
The task here is to edit a sentence full of grammatical errors and correct them. The problem with this translated version of the MT-Bench is that the sentence was already corrected by GPT4 during translation. So now the model is requested to correct a sentence that has no remaining grammatical errors.
Example: Pointless translation of anglicisms
*{ "category": "roleplay", "turns": [ "Jetzt bist du ein Maschinenlern-Ingenieur. Deine Aufgabe besteht darin, komplexe Maschinenlernkonzepte auf einfache Weise zu erklären, damit Kunden ohne technischen Hintergrund deine Produkte verstehen und ihnen vertrauen können. Fangen wir an mit der Frage: Was ist ein Sprachmodell? Wird es mit gelabelten oder ungelabelten Daten trainiert?, "Ist das wahr? Ich habe gehört, dass andere Unternehmen unterschiedliche Ansätze verwenden, um dies zu tun und es sicherer zu machen.]}*
As we can see here, the GPT4 translation of this dataset led to a term that no one would use when speaking German. Instead, one would rather use the original English term “Machine Learning Engineer” or the properly translated term “Ingenieur für maschinelles Lernen”.
Our approach to a German Benchmark
So, instead of simply translating the MT-Bench with GPT4, we applied a mixed approach of automatic translation and human evaluation. In a first step, we translated the complete MT-Bench into German using GPT4. In a second step, we conducted a thorough manual evaluation of each translated dataset to ensure the following quality criteria:
- The dataset has been translated into German language.
- The German translation consists of an appropriate and genuine wording.
- The context of the translated dataset is meaningful and reasonable for assessing German language skills of the model.

- The content of the translated dataset is still reasonable after translation.
Although this method is undeniably time-consuming, it enables us to create a substantive benchmark for evaluating the model's proficiency in completing various benchmark categories. Nonetheless, it is important to acknowledge that even with this meticulous approach, a truly flawless benchmark remains elusive, as minor oversights may still occur due to human errors.
Nevertheless, when we compare the current approaches of German Language Model teams available on Hugging Face, we may assume that our German MT-Bench, as of today, stands as the most precise and practical benchmark for assessing German LLMs. Consequently, the benchmark scores we present offer a realistic evaluation of the model's performance in German.
|
[
"## Benchmark\n\nGerman Benchmarks on Hugging Face\n\nAt present, there is a notable scarcity, if not a complete absence, of reliable and true German benchmarks designed to evaluate the capabilities of German Language Models (LLMs). While some efforts have been made to translate English benchmarks into German, these attempts often fall short in terms of precision, accuracy, and context sensitivity, even when employing GPT-4 technology. Take, for instance, the MT-Bench, a widely recognized and frequently used benchmark for assessing LLM performance in real-world scenarios. The seemingly straightforward and cost-effective approach of translating MT-Bench into German using GPT-4 proves to be counterproductive, resulting in subpar outcomes that hinder a realistic and contextually appropriate evaluation of German LLMs. To illustrate this, we offer a few examples extracted from translated MT-Bench versions available on Hugging Face.\n\nExample: Uncommon use of words\n\n*{ \"category\": \"writing\", \"turns\": [ \"Schreibe eine überzeugende E-Mail, um deinen introvertierten Freund, der öffentliches Sprechen nicht mag, dazu zu bringen, sich als Gastredner bei einer lokalen Veranstaltung zu engagieren. Verwende überzeugende Argumente und gehe auf mögliche Einwände ein. Bitte sei prägnant.\", \"Kannst du deine vorherige Antwort umformulieren und in jedem Satz eine Metapher oder ein Gleichnis einbauen?\" ] }*\n\nWhat you can see here is an example of a German word, someone would not use in a real conversation (marked in bold). In a real conversation someone would rather use “Vergleich” instead of “Gleichnis”.\n\nExample: Wrong context\n\n*{ \"category\": \"roleplay\", \"turns\": [ \"Bitte nehmen Sie die Rolle eines englischen Übersetzers an, der damit beauftragt ist, Rechtschreibung und Sprache zu korrigieren und zu verbessern. 
Unabhängig von der Sprache, die ich verwende, sollten Sie sie identifizieren, übersetzen und mit einer verfeinerten und polierten Version meines Textes auf Englisch antworten.*\n\nHere we get a request to translate a given sentence in English language and phrase a more sophisticated sentence compared to the original sentence. As we aim to assess a German LLM requesting the model to translate a sentence in English language would be pointless.\n\nExample: Wrong content\n\n*{\"category\": \"writing\", \"turns\": [ \"Bearbeite den folgenden Absatz, um etwaige grammatikalische Fehler zu korrigieren: *Sie erinnerte sich nicht daran, wo ihre Geldbörse ist, also denke ich, dass sie im Auto ist, aber er sagt, dass sie auf dem Küchentisch ist, aber er ist sich nicht sicher, und dann haben sie mich gebeten, danach zu suchen, sie sagt: \"Kannst du?\", und ich antworte: \"Vielleicht, aber ich bin nicht sicher\", und er hat mich nicht gehört, und er fragt: \"Was?\", \"Hast du es gefunden?\"*.\", \"Ändere deine frühere Antwort und vermeide die Verwendung von geschlechtsspezifischen Pronomen.\" ]}*\n\nThe task here is to edit a sentence full of grammatical errors and correct them. The problem with this translated version of the MT-bench is that the sentence was already corrected by GPT4 during translation. So now the model is requested to correct a sentence that has no more grammatical errors.\n\nExample: Pointless translation of anglicisms\n\n*{ \"category\": \"roleplay\", \"turns\": [ \"Jetzt bist du ein Maschinenlern-Ingenieur. Deine Aufgabe besteht darin, komplexe Maschinenlernkonzepte auf einfache Weise zu erklären, damit Kunden ohne technischen Hintergrund deine Produkte verstehen und ihnen vertrauen können. Fangen wir an mit der Frage: Was ist ein Sprachmodell? Wird es mit gelabelten oder ungelabelten Daten trainiert?, \"Ist das wahr? 
Ich habe gehört, dass andere Unternehmen unterschiedliche Ansätze verwenden, um dies zu tun und es sicherer zu machen.]}*\n\nAs we can see here, the GPT4 translation of this dataset lead to a term that no one would use when speaking German. Instead someone would rather use the original English term “Machine Learning Engineer” or the properly translated term “Ingenieur für maschinelles Lernen”.\n\nOur approach to a German Benchmark\n\nSo, what we did instead of simply translating the MT-Bench with GPT4, we applied a mixed approach of automatic translation and human evaluation. In a first step we translated the complete MT-Bench into German language by using GPT4. In a second step we conducted a thorough manual evaluation of each translated dataset to ensure following quality criteria: \n\n- The dataset has been translated into German language.\n\n- The German translation consists of an appropriate and genuine wording.\n\n- the context of the translated dataset is meaningful and reasonable for assessing German language skills of the model.\n\n- the content of the translated dataset is still reasonable after translation.\n\nAlthough this method is undeniably time-consuming, it enables us to create a substantive benchmark for evaluating the model's proficiency in completing various benchmark categories. Nonetheless, it is important to acknowledge that even with this meticulous approach, a truly flawless benchmark remains elusive, as minor oversights may still occur due to human errors.\nNevertheless, when we compare the current approaches of German Language Model teams available on Hugging Face, we may assume that our German MT-Bench, as of today, stands as the most precise and practical benchmark for assessing German LLMs. Consequently, the benchmark scores we present offer a realistic evaluation of the models performance in German language."
] |
[
"TAGS\n#language-German #region-us \n",
"## Benchmark\n\nGerman Benchmarks on Hugging Face\n\nAt present, there is a notable scarcity, if not a complete absence, of reliable and true German benchmarks designed to evaluate the capabilities of German Language Models (LLMs). While some efforts have been made to translate English benchmarks into German, these attempts often fall short in terms of precision, accuracy, and context sensitivity, even when employing GPT-4 technology. Take, for instance, the MT-Bench, a widely recognized and frequently used benchmark for assessing LLM performance in real-world scenarios. The seemingly straightforward and cost-effective approach of translating MT-Bench into German using GPT-4 proves to be counterproductive, resulting in subpar outcomes that hinder a realistic and contextually appropriate evaluation of German LLMs. To illustrate this, we offer a few examples extracted from translated MT-Bench versions available on Hugging Face.\n\nExample: Uncommon use of words\n\n*{ \"category\": \"writing\", \"turns\": [ \"Schreibe eine überzeugende E-Mail, um deinen introvertierten Freund, der öffentliches Sprechen nicht mag, dazu zu bringen, sich als Gastredner bei einer lokalen Veranstaltung zu engagieren. Verwende überzeugende Argumente und gehe auf mögliche Einwände ein. Bitte sei prägnant.\", \"Kannst du deine vorherige Antwort umformulieren und in jedem Satz eine Metapher oder ein Gleichnis einbauen?\" ] }*\n\nWhat you can see here is an example of a German word, someone would not use in a real conversation (marked in bold). In a real conversation someone would rather use “Vergleich” instead of “Gleichnis”.\n\nExample: Wrong context\n\n*{ \"category\": \"roleplay\", \"turns\": [ \"Bitte nehmen Sie die Rolle eines englischen Übersetzers an, der damit beauftragt ist, Rechtschreibung und Sprache zu korrigieren und zu verbessern. 
Unabhängig von der Sprache, die ich verwende, sollten Sie sie identifizieren, übersetzen und mit einer verfeinerten und polierten Version meines Textes auf Englisch antworten.*\n\nHere we get a request to translate a given sentence in English language and phrase a more sophisticated sentence compared to the original sentence. As we aim to assess a German LLM requesting the model to translate a sentence in English language would be pointless.\n\nExample: Wrong content\n\n*{\"category\": \"writing\", \"turns\": [ \"Bearbeite den folgenden Absatz, um etwaige grammatikalische Fehler zu korrigieren: *Sie erinnerte sich nicht daran, wo ihre Geldbörse ist, also denke ich, dass sie im Auto ist, aber er sagt, dass sie auf dem Küchentisch ist, aber er ist sich nicht sicher, und dann haben sie mich gebeten, danach zu suchen, sie sagt: \"Kannst du?\", und ich antworte: \"Vielleicht, aber ich bin nicht sicher\", und er hat mich nicht gehört, und er fragt: \"Was?\", \"Hast du es gefunden?\"*.\", \"Ändere deine frühere Antwort und vermeide die Verwendung von geschlechtsspezifischen Pronomen.\" ]}*\n\nThe task here is to edit a sentence full of grammatical errors and correct them. The problem with this translated version of the MT-bench is that the sentence was already corrected by GPT4 during translation. So now the model is requested to correct a sentence that has no more grammatical errors.\n\nExample: Pointless translation of anglicisms\n\n*{ \"category\": \"roleplay\", \"turns\": [ \"Jetzt bist du ein Maschinenlern-Ingenieur. Deine Aufgabe besteht darin, komplexe Maschinenlernkonzepte auf einfache Weise zu erklären, damit Kunden ohne technischen Hintergrund deine Produkte verstehen und ihnen vertrauen können. Fangen wir an mit der Frage: Was ist ein Sprachmodell? Wird es mit gelabelten oder ungelabelten Daten trainiert?, \"Ist das wahr? 
Ich habe gehört, dass andere Unternehmen unterschiedliche Ansätze verwenden, um dies zu tun und es sicherer zu machen.]}*\n\nAs we can see here, the GPT4 translation of this dataset lead to a term that no one would use when speaking German. Instead someone would rather use the original English term “Machine Learning Engineer” or the properly translated term “Ingenieur für maschinelles Lernen”.\n\nOur approach to a German Benchmark\n\nSo, what we did instead of simply translating the MT-Bench with GPT4, we applied a mixed approach of automatic translation and human evaluation. In a first step we translated the complete MT-Bench into German language by using GPT4. In a second step we conducted a thorough manual evaluation of each translated dataset to ensure following quality criteria: \n\n- The dataset has been translated into German language.\n\n- The German translation consists of an appropriate and genuine wording.\n\n- the context of the translated dataset is meaningful and reasonable for assessing German language skills of the model.\n\n- the content of the translated dataset is still reasonable after translation.\n\nAlthough this method is undeniably time-consuming, it enables us to create a substantive benchmark for evaluating the model's proficiency in completing various benchmark categories. Nonetheless, it is important to acknowledge that even with this meticulous approach, a truly flawless benchmark remains elusive, as minor oversights may still occur due to human errors.\nNevertheless, when we compare the current approaches of German Language Model teams available on Hugging Face, we may assume that our German MT-Bench, as of today, stands as the most precise and practical benchmark for assessing German LLMs. Consequently, the benchmark scores we present offer a realistic evaluation of the models performance in German language."
] |
[
10,
1320
] |
[
"passage: TAGS\n#language-German #region-us \n"
] |
9c21c0506bf4ea9cb4dc33b925a18548c7484f42
|
# Dataset Card for "base_model_client_dataset_20231012_093051"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tr416/base_model_client_dataset_20231012_093051
|
[
"region:us"
] |
2023-10-12T08:30:51+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 75203880.0, "num_examples": 29285}, {"name": "test", "num_bytes": 760128.0, "num_examples": 296}], "download_size": 12773623, "dataset_size": 75964008.0}}
|
2023-10-12T08:30:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "base_model_client_dataset_20231012_093051"
More Information needed
|
[
"# Dataset Card for \"base_model_client_dataset_20231012_093051\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"base_model_client_dataset_20231012_093051\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"base_model_client_dataset_20231012_093051\"\n\nMore Information needed"
] |
7d55c82876d463b849330e5eb9aeb1c5202eb85c
|
# Dataset Card for "base_client_model_dataset_20231012_093233"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tr416/base_client_model_dataset_20231012_093233
|
[
"region:us"
] |
2023-10-12T08:32:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 75203880.0, "num_examples": 29285}, {"name": "test", "num_bytes": 760128.0, "num_examples": 296}], "download_size": 12789039, "dataset_size": 75964008.0}}
|
2023-10-12T08:32:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "base_client_model_dataset_20231012_093233"
More Information needed
|
[
"# Dataset Card for \"base_client_model_dataset_20231012_093233\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"base_client_model_dataset_20231012_093233\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"base_client_model_dataset_20231012_093233\"\n\nMore Information needed"
] |
7ed4712e6819b66d4c09f14ec9d4d991e49d4b5c
|
# Dataset Card for Evaluation run of harborwater/open-llama-3b-everything-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/harborwater/open-llama-3b-everything-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [harborwater/open-llama-3b-everything-v2](https://huggingface.co/harborwater/open-llama-3b-everything-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_harborwater__open-llama-3b-everything-v2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-29T00:43:57.732775](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-everything-v2/blob/main/results_2023-10-29T00-43-57.732775.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0020973154362416107,
"em_stderr": 0.0004685065030368325,
"f1": 0.0560864093959733,
"f1_stderr": 0.0013597729822813858,
"acc": 0.341030820866541,
"acc_stderr": 0.008350924483766176
},
"harness|drop|3": {
"em": 0.0020973154362416107,
"em_stderr": 0.0004685065030368325,
"f1": 0.0560864093959733,
"f1_stderr": 0.0013597729822813858
},
"harness|gsm8k|5": {
"acc": 0.01592115238817286,
"acc_stderr": 0.0034478192723889915
},
"harness|winogrande|5": {
"acc": 0.6661404893449092,
"acc_stderr": 0.013254029695143358
}
}
```
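Since the aggregated results file is plain JSON, the per-task metrics above can be picked out with a few lines of Python. A minimal sketch (the dict below simply mirrors the JSON shown above; the field names are taken from it):

```python
# Minimal sketch: reading per-task accuracies out of an aggregated-results
# dict shaped like the JSON above (not part of the official tooling).
results = {
    "all": {
        "acc": 0.341030820866541,
        "acc_stderr": 0.008350924483766176,
    },
    "harness|gsm8k|5": {"acc": 0.01592115238817286},
    "harness|winogrande|5": {"acc": 0.6661404893449092},
}

# Collect per-task accuracies, skipping the "all" aggregate and any
# tasks (like drop) that report other metrics instead of "acc".
task_acc = {task: vals["acc"]
            for task, vals in results.items()
            if task != "all" and "acc" in vals}

for task, acc in sorted(task_acc.items()):
    print(f"{task}: {acc:.4f}")
```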
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_harborwater__open-llama-3b-everything-v2
|
[
"region:us"
] |
2023-10-12T08:37:29+00:00
|
{"pretty_name": "Evaluation run of harborwater/open-llama-3b-everything-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [harborwater/open-llama-3b-everything-v2](https://huggingface.co/harborwater/open-llama-3b-everything-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_harborwater__open-llama-3b-everything-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-29T00:43:57.732775](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-everything-v2/blob/main/results_2023-10-29T00-43-57.732775.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0020973154362416107,\n \"em_stderr\": 0.0004685065030368325,\n \"f1\": 0.0560864093959733,\n \"f1_stderr\": 0.0013597729822813858,\n \"acc\": 0.341030820866541,\n \"acc_stderr\": 0.008350924483766176\n },\n \"harness|drop|3\": {\n \"em\": 0.0020973154362416107,\n \"em_stderr\": 0.0004685065030368325,\n \"f1\": 0.0560864093959733,\n \"f1_stderr\": 0.0013597729822813858\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01592115238817286,\n \"acc_stderr\": 0.0034478192723889915\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6661404893449092,\n \"acc_stderr\": 0.013254029695143358\n }\n}\n```", "repo_url": "https://huggingface.co/harborwater/open-llama-3b-everything-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|arc:challenge|25_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_29T00_43_57.732775", "path": ["**/details_harness|drop|3_2023-10-29T00-43-57.732775.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-29T00-43-57.732775.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_29T00_43_57.732775", "path": ["**/details_harness|gsm8k|5_2023-10-29T00-43-57.732775.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-29T00-43-57.732775.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hellaswag|10_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T09-37-10.252705.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T09-37-10.252705.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-12T09-37-10.252705.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T09-37-10.252705.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-12T09-37-10.252705.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-12T09-37-10.252705.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T09-37-10.252705.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-12T09-37-10.252705.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_29T00_43_57.732775", "path": ["**/details_harness|winogrande|5_2023-10-29T00-43-57.732775.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-29T00-43-57.732775.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_12T09_37_10.252705", "path": ["results_2023-10-12T09-37-10.252705.parquet"]}, {"split": "2023_10_29T00_43_57.732775", "path": ["results_2023-10-29T00-43-57.732775.parquet"]}, {"split": "latest", "path": ["results_2023-10-29T00-43-57.732775.parquet"]}]}]}
2023-10-28T23:44:10+00:00
TAGS
#region-us
# Dataset Card for Evaluation run of harborwater/open-llama-3b-everything-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model harborwater/open-llama-3b-everything-v2 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
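For example, a minimal helper along these lines (note: the repository id below is inferred from the leaderboard's usual `details_<org>__<model>` naming convention and is not stated explicitly in this card):

```python
def load_details(config: str = "harness_winogrande_5", split: str = "latest"):
    """Load one configuration of this details dataset from the Hub.

    The repository id is assumed from the Open LLM Leaderboard's usual
    "details_<org>__<model>" naming convention.
    """
    # Imported lazily so the helper can be defined without the library installed.
    from datasets import load_dataset  # pip install datasets

    repo_id = "open-llm-leaderboard/details_harborwater__open-llama-3b-everything-v2"
    return load_dataset(repo_id, config, split=split)

# Example usage (requires network access and the `datasets` library):
# data = load_details("harness_winogrande_5", split="latest")
```

Any of the configuration names listed in this card's metadata (e.g. `harness_hendrycksTest_astronomy_5`) can be passed as `config`, and a timestamped split name can be passed instead of `"latest"` to fetch a specific run.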
## Latest results
These are the latest results from run 2023-10-29T00:43:57.732775 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of harborwater/open-llama-3b-everything-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model harborwater/open-llama-3b-everything-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T00:43:57.732775(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of harborwater/open-llama-3b-everything-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model harborwater/open-llama-3b-everything-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T00:43:57.732775(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of harborwater/open-llama-3b-everything-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model harborwater/open-llama-3b-everything-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-29T00:43:57.732775 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9b101036b50f2dd623a1927ec7a5350757a4b598
|
# Dataset Card for "German_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kristinashemet/German_datasets
|
[
"region:us"
] |
2023-10-12T09:05:47+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 259881583, "num_examples": 346965}], "download_size": 137269817, "dataset_size": 259881583}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T10:43:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "German_datasets"
More Information needed
|
[
"# Dataset Card for \"German_datasets\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"German_datasets\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"German_datasets\"\n\nMore Information needed"
] |
5adc4da32f4cb5092234096ffde7bef3e7317fa5
|
# TREC Conversational Assistance Track (CAsT)
There are currently few datasets appropriate for training and evaluating models for Conversational Information Seeking (CIS). The main aim of TREC CAsT is to advance research on conversational search systems. The goal of the track is to create a reusable benchmark for open-domain information centric conversational dialogues.
# Year 1 (TREC 2019)
* Read the [TREC 2019 Overview](https://arxiv.org/abs/2003.13624) paper.
## 2019 Data
### Topics
* [Training topics] - 30 example training topics
* [Training judgments] - The judgments are graded on a three point scale (2 very relevant, 1 relevant, and 0 not relevant).
* [Evaluation topics] - 50 evaluation topics
### Sample of Dataset
* Title: US Judicial history
* Description: Judicial history in the US including key court cases and what they established.
* Prompts:
1. What are the most important US Supreme Court cases?
2. What did plessy v. ferguson establish?
3. How about marbury vs madison?
4. Was it unanimous?
5. What was the implication of roe vs wade?
6. What were the main arguments?
7. What was the point of the brown v board of education?
8. What were the main arguments?
9. Why is it important today?
### Collection
* The corpus is a combination of three standard TREC collections: MARCO Ranking passages, Wikipedia (TREC CAR), and News (Washington Post)
* The [MS MARCO Passage Ranking collection](https://msmarco.blob.core.windows.net/msmarcoranking/collection.tar.gz) - This file only includes the passage id and passage text. For convenience, we also provide a passage id -> URL mapping file in TSV format [pid to URL file](http://boston.lti.cs.cmu.edu/vaibhav2/cast/marco_pas_url.tsv).
* The [TREC CAR paragraph collection v2.0](http://trec-car.cs.unh.edu/datareleases/v2.0/paragraphCorpus.v2.0.tar.xz)
* The [TREC Washington Post Corpus version 2](https://ir.nist.gov/wapo/WashingtonPost.v2.tar.gz): Note this is behind a password and requires an organizational agreement, to obtain it see: https://ir.nist.gov/wapo/
### Document ID format
* The document id format is `[collection_id_paragraph_id]` with collection id and paragraph id separated by an underscore.
* The collection ids are in the set: `{MARCO, CAR, WAPO}`.
* The paragraph ids for MARCO and CAR are the standard ids provided by those collections. For WAPO the paragraph ID is `[article_id-paragraph_index]`, where `paragraph_index` is the *1-based* index of the paragraph (using the provided paragraph markup), joined to the article id by a single dash.
* Example WaPo combined document id: `[WAPO_903cc1eab726b829294d1abdd755d5ab-1]`, or CAR: `[CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a]`
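For illustration, the id scheme above can be sketched as a small parser. The following Python snippet is a minimal, illustrative example only (it assumes the square brackets in the examples are notation rather than part of the id, and the helper name `parse_doc_id` is ours, not part of the TREC-CAsT tools):

```python
def parse_doc_id(doc_id: str) -> dict:
    """Split a combined CAsT document id into its collection and paragraph parts."""
    # Ids look like "MARCO_1234", "CAR_<sha1>", or "WAPO_<article_id>-<index>".
    collection, _, paragraph_id = doc_id.partition("_")
    if collection not in {"MARCO", "CAR", "WAPO"}:
        raise ValueError(f"unknown collection: {collection}")
    parsed = {"collection": collection, "paragraph_id": paragraph_id}
    if collection == "WAPO":
        # WAPO paragraph ids embed a 1-based paragraph index after the last dash.
        article_id, _, index = paragraph_id.rpartition("-")
        parsed["article_id"] = article_id
        parsed["paragraph_index"] = int(index)
    return parsed

print(parse_doc_id("WAPO_903cc1eab726b829294d1abdd755d5ab-1"))
print(parse_doc_id("CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a"))
```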
## Code and tools
* [TREC-CAsT Tools](https://github.com/gla-ial/trec-cast-tools) repository with code and scripts for processing data.
* The tools contain scripts for parsing the collection into standard indexing formats. It also provides APIs for working with the topics (in text, json, and protocol buffer formats).
|
satyanshu404/trec-cast-2019
|
[
"arxiv:2003.13624",
"region:us"
] |
2023-10-12T09:07:14+00:00
|
{}
|
2023-11-02T14:16:22+00:00
|
[
"2003.13624"
] |
[] |
TAGS
#arxiv-2003.13624 #region-us
|
# TREC Conversational Assistance Track (CAsT)
There are currently few datasets appropriate for training and evaluating models for Conversational Information Seeking (CIS). The main aim of TREC CAsT is to advance research on conversational search systems. The goal of the track is to create a reusable benchmark for open-domain information centric conversational dialogues.
# Year 1 (TREC 2019)
* Read the TREC 2019 Overview paper.
## 2019 Data
### Topics
* [Training topics] - 30 example training topics
* [Training judgments] - The judgments are graded on a three point scale (2 very relevant, 1 relevant, and 0 not relevant).
* [Evaluation topics] - 50 evaluation topics
### Sample of Dataset
* Title: US Judicial history
* Description: Judicial history in the US including key court cases and what they established.
* Prompts:
1. What are the most important US Supreme Court cases?
2. What did plessy v. ferguson establish?
3. How about marbury vs madison?
4. Was it unanimous?
5. What was the implication of roe vs wade?
6. What were the main arguments?
7. What was the point of the brown v board of education?
8. What were the main arguments?
9. Why is it important today?
### Collection
* The corpus is a combination of three standard TREC collections: MARCO Ranking passages, Wikipedia (TREC CAR), and News (Washington Post)
* The MS MARCO Passage Ranking collection - This file only includes the passage id and passage text. For convenience, we also provide a passage id -> URL mapping file in TSV format pid to URL file.
* The TREC CAR paragraph collection v2.0
* The TREC Washington Post Corpus version 2: Note this is behind a password and requires an organizational agreement, to obtain it see: URL
### Document ID format
* The document id format is '[collection_id_paragraph_id]' with collection id and paragraph id separated by an underscore.
* The collection ids are in the set: '{MARCO, CAR, WAPO}'.
* The paragraph ids for MARCO and CAR are the standard ids provided by those collections. For WAPO the paragraph ID is '[article_id-paragraph_index]', where paragraph_index is the *1-based* index of the paragraph (using the provided paragraph markup), joined to the article id by a single dash.
* Example WaPo combined document id: '[WAPO_903cc1eab726b829294d1abdd755d5ab-1]', or CAR: '[CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a]'
## Code and tools
* TREC-CAsT Tools repository with code and scripts for processing data.
* The tools contain scripts for parsing the collection into standard indexing formats. It also provides APIs for working with the topics (in text, json, and protocol buffer formats).
|
[
"# TREC Conversational Assistance Track (CAsT) \n\nThere are currently few datasets appropriate for training and evaluating models for Conversational Information Seeking (CIS). The main aim of TREC CAsT is to advance research on conversational search systems. The goal of the track is to create a reusable benchmark for open-domain information centric conversational dialogues.",
"# Year 1 (TREC 2019)\n* Read the TREC 2019 Overview paper.",
"## 2019 Data",
"### Topics\n * [Training topics] - 30 example training topics\n * [Training judgments] - The judgments are graded on a three point scale (2 very relevant, 1 relevant, and 0 not relevant). \n * [Evaluation topics]- 50 evaluation topics\n\n ### Sample of Dataset \n * Title: US Judicial history\n * Description: Judicial history in the US including key court cases and what they established.\n * Prompts:\n 1. What are the most important US Supreme Court cases?\n 2. What did plessy v. ferguson establish?\n 3. How about marbury vs madison?\n 4. Was it unanimous?\n 5. What was the implication of roe vs wade?\n 6. What were the main arguments?\n 7. What was the point of the brown v board of education?\n 8. What were the main arguments?\n 9. Why is it important today?",
"### Collection\n * The corpus is a combination of three standard TREC collections: MARCO Ranking passages, Wikipedia (TREC CAR), and News (Washington Post)\n * The MS MARCO Passage Ranking collection - This file only includes the passage id and passage text. For convenience, we also provide a passage id -> URL mapping file in TSV format pid to URL file. \n * The TREC CAR paragraph collection v2.0\n * The TREC Washington Post Corpus version 2: Note this is behind a password and requires an organizational agreement, to obtain it see: URL",
"### Document ID format\n * The document id format is '[collection_id_paragraph_id]' with collection id and paragraph id separated by an underscore.\n * The collection ids are in the set: '{MARCO, CAR, WAPO}'. \n * The paragraph ids are: standard provided by MARCO and CAR. For WAPO the paragraph ID is '[article_id-paragraph_index]' where the paragraph_index is the *starting from 1-based* index of the paragraph using the provided paragraph markup separated by a single dash. \n * Example WaPo combined document id: '[WAPO_903cc1eab726b829294d1abdd755d5ab-1]', or CAR: '[CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a]'",
"## Code and tools\n* TREC-CAsT Tools repository with code and scripts for processing data. \n* The tools contain scripts for parsing the collection into standard indexing formats. It also provides APIs for working with the topics (in text, json, and protocol buffer formats)."
] |
[
"TAGS\n#arxiv-2003.13624 #region-us \n",
"# TREC Conversational Assistance Track (CAsT) \n\nThere are currently few datasets appropriate for training and evaluating models for Conversational Information Seeking (CIS). The main aim of TREC CAsT is to advance research on conversational search systems. The goal of the track is to create a reusable benchmark for open-domain information centric conversational dialogues.",
"# Year 1 (TREC 2019)\n* Read the TREC 2019 Overview paper.",
"## 2019 Data",
"### Topics\n * [Training topics] - 30 example training topics\n * [Training judgments] - The judgments are graded on a three point scale (2 very relevant, 1 relevant, and 0 not relevant). \n * [Evaluation topics]- 50 evaluation topics\n\n ### Sample of Dataset \n * Title: US Judicial history\n * Description: Judicial history in the US including key court cases and what they established.\n * Prompts:\n 1. What are the most important US Supreme Court cases?\n 2. What did plessy v. ferguson establish?\n 3. How about marbury vs madison?\n 4. Was it unanimous?\n 5. What was the implication of roe vs wade?\n 6. What were the main arguments?\n 7. What was the point of the brown v board of education?\n 8. What were the main arguments?\n 9. Why is it important today?",
"### Collection\n * The corpus is a combination of three standard TREC collections: MARCO Ranking passages, Wikipedia (TREC CAR), and News (Washington Post)\n * The MS MARCO Passage Ranking collection - This file only includes the passage id and passage text. For convenience, we also provide a passage id -> URL mapping file in TSV format pid to URL file. \n * The TREC CAR paragraph collection v2.0\n * The TREC Washington Post Corpus version 2: Note this is behind a password and requires an organizational agreement, to obtain it see: URL",
"### Document ID format\n * The document id format is '[collection_id_paragraph_id]' with collection id and paragraph id separated by an underscore.\n * The collection ids are in the set: '{MARCO, CAR, WAPO}'. \n * The paragraph ids are: standard provided by MARCO and CAR. For WAPO the paragraph ID is '[article_id-paragraph_index]' where the paragraph_index is the *starting from 1-based* index of the paragraph using the provided paragraph markup separated by a single dash. \n * Example WaPo combined document id: '[WAPO_903cc1eab726b829294d1abdd755d5ab-1]', or CAR: '[CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a]'",
"## Code and tools\n* TREC-CAsT Tools repository with code and scripts for processing data. \n* The tools contain scripts for parsing the collection into standard indexing formats. It also provides APIs for working with the topics (in text, json, and protocol buffer formats)."
] |
[
15,
84,
18,
3,
186,
120,
199,
68
] |
[
"passage: TAGS\n#arxiv-2003.13624 #region-us \n# TREC Conversational Assistance Track (CAsT) \n\nThere are currently few datasets appropriate for training and evaluating models for Conversational Information Seeking (CIS). The main aim of TREC CAsT is to advance research on conversational search systems. The goal of the track is to create a reusable benchmark for open-domain information centric conversational dialogues.# Year 1 (TREC 2019)\n* Read the TREC 2019 Overview paper.## 2019 Data### Topics\n * [Training topics] - 30 example training topics\n * [Training judgments] - The judgments are graded on a three point scale (2 very relevant, 1 relevant, and 0 not relevant). \n * [Evaluation topics]- 50 evaluation topics\n\n ### Sample of Dataset \n * Title: US Judicial history\n * Description: Judicial history in the US including key court cases and what they established.\n * Prompts:\n 1. What are the most important US Supreme Court cases?\n 2. What did plessy v. ferguson establish?\n 3. How about marbury vs madison?\n 4. Was it unanimous?\n 5. What was the implication of roe vs wade?\n 6. What were the main arguments?\n 7. What was the point of the brown v board of education?\n 8. What were the main arguments?\n 9. Why is it important today?### Collection\n * The corpus is a combination of three standard TREC collections: MARCO Ranking passages, Wikipedia (TREC CAR), and News (Washington Post)\n * The MS MARCO Passage Ranking collection - This file only includes the passage id and passage text. For convenience, we also provide a passage id -> URL mapping file in TSV format pid to URL file. \n * The TREC CAR paragraph collection v2.0\n * The TREC Washington Post Corpus version 2: Note this is behind a password and requires an organizational agreement, to obtain it see: URL"
] |
0f5df9be49eaf3dd4686491852df7e15e3fae02a
|
# Dataset Card for "xlmr_eval2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_eval2
|
[
"region:us"
] |
2023-10-12T09:14:39+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 19326005, "num_examples": 11590}], "download_size": 5464964, "dataset_size": 19326005}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T09:26:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_eval2"
More Information needed
|
[
"# Dataset Card for \"xlmr_eval2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_eval2\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_eval2\"\n\nMore Information needed"
] |
ef3ce1dc56ea21fcb3bf3a161247b67316958540
|
# Dataset Card for "prompted_hf_cot_gsm8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Dahoas/prompted_hf_cot_gsm8k
|
[
"region:us"
] |
2023-10-12T09:20:39+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17216169, "num_examples": 7217}, {"name": "test", "num_bytes": 3184819, "num_examples": 1319}, {"name": "val", "num_bytes": 613398, "num_examples": 256}], "download_size": 10146546, "dataset_size": 21014386}}
|
2023-10-16T09:36:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "prompted_hf_cot_gsm8k"
More Information needed
|
[
"# Dataset Card for \"prompted_hf_cot_gsm8k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"prompted_hf_cot_gsm8k\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"prompted_hf_cot_gsm8k\"\n\nMore Information needed"
] |
7373bcf114c847880e9a4571c8dc38b6975b1a41
|
# Dataset Card for "xlmr_int_pr_sw_trn_ep4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_pr_sw_trn_ep4
|
[
"region:us"
] |
2023-10-12T09:28:02+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 575211680, "num_examples": 452280}], "download_size": 164056118, "dataset_size": 575211680}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T09:28:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_pr_sw_trn_ep4"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_pr_sw_trn_ep4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_pr_sw_trn_ep4\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_pr_sw_trn_ep4\"\n\nMore Information needed"
] |
96ca97c1f15dd6567a36b44172b12bda19f4d25f
|
# Dataset Card for "xlmr_int_pr_sw_trn_ep3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_pr_sw_trn_ep3
|
[
"region:us"
] |
2023-10-12T09:34:48+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 431400213, "num_examples": 339210}], "download_size": 123139414, "dataset_size": 431400213}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T09:35:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_pr_sw_trn_ep3"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_pr_sw_trn_ep3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_pr_sw_trn_ep3\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_pr_sw_trn_ep3\"\n\nMore Information needed"
] |
f1fc6b296b28a8bca8db9535455ffa00ab2abcda
|
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
The Inshorts News dataset: Inshorts is a news service that offers short summaries (60 words or less) of news from around the web. This dataset contains headlines and summaries of news items together with their sources.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** An abstractive text summarization technique using transformer model with self-attention mechanism
- https://paperswithcode.com/paper/an-abstractive-text-summarization-technique
Neural Computing and Applications 2023
Sandeep Kumar, Arun Solanki
Creating a summarized version of a text document that still conveys precise meaning is an incredibly complex endeavor in natural language processing (NLP). Abstractive text summarization (ATS) is the process of using facts from source sentences and merging them into concise representations while maintaining the content and intent of the text. Manually summarizing large amounts of text is challenging and time-consuming for humans. Therefore, text summarization has become an exciting research focus in NLP. This research paper proposed an ATS model using a Transformer Technique with Self-Attention Mechanism (T2SAM). The self-attention mechanism is added to the transformer to solve the problem of coreference in text. This helps the system understand the text better. The proposed T2SAM model improves the performance of text summarization. It is trained on the Inshorts News dataset combined with the DUC-2004 shared tasks dataset. The performance of the proposed model has been evaluated using the ROUGE metrics, and it has been shown to outperform the existing state-of-the-art baseline models. The proposed model reduces the training loss from 10.3058 (at the starting point) to a minimum of 1.8220 over 30 epochs, and it achieves a 48.50% F1-score on both the Inshorts and DUC-2004 news datasets.
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
Kaggle and Inshort news app
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Web scraping
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
https://doi.org/10.1007/s00521-023-08687-7
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
|
sandeep16064/news_summary
|
[
"region:us"
] |
2023-10-12T09:50:33+00:00
|
{}
|
2023-10-12T10:36:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
The Inshorts News dataset: Inshorts is a news service that offers short summaries (60 words or less) of news from around the web. This dataset contains headlines and summaries of news items together with their sources.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]: An abstractive text summarization technique using transformer model with self-attention mechanism
- URL
Neural Computing and Applications 2023
Sandeep Kumar, Arun Solanki
Creating a summarized version of a text document that still conveys precise meaning is an incredibly complex endeavor in natural language processing (NLP). Abstractive text summarization (ATS) is the process of using facts from source sentences and merging them into concise representations while maintaining the content and intent of the text. Manually summarizing large amounts of text is challenging and time-consuming for humans. Therefore, text summarization has become an exciting research focus in NLP. This research paper proposed an ATS model using a Transformer Technique with Self-Attention Mechanism (T2SAM). The self-attention mechanism is added to the transformer to solve the problem of coreference in text. This helps the system understand the text better. The proposed T2SAM model improves the performance of text summarization. It is trained on the Inshorts News dataset combined with the DUC-2004 shared tasks dataset. The performance of the proposed model has been evaluated using the ROUGE metrics, and it has been shown to outperform the existing state-of-the-art baseline models. The proposed model reduces the training loss from 10.3058 (at the starting point) to a minimum of 1.8220 over 30 epochs, and it achieves a 48.50% F1-score on both the Inshorts and DUC-2004 news datasets.
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
Kaggle and Inshort news app
#### Data Collection and Processing
Web scraping
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
URL
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
|
[
"# Dataset Card for Dataset Name\n\n\n\nInshorts News dataset Inshorts provides a news summary in 60 words or less. Inshorts is a news service that offers short summaries of news from around the web. This dataset contains headlines and a summary of news items and their source.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: An abstractive text summarization technique using transformer model with self-attention mechanism\n- URL\n\nNeural Computing and Applications 2023\nSandeep Kumar, Arun Solanki\n\nCreating a summarized version of a text document that still conveys precise meaning is an incredibly complex endeavor in natural language processing (NLP). Abstract text summarization (ATS) is the process of using facts from source sentences and merging them into concise representations while maintaining the content and intent of the text. Manually summarizing large amounts of text are challenging and time-consuming for humans. Therefore, text summarization has become an exciting research focus in NLP. This research paper proposed an ATS model using a Transformer Technique with Self-Attention Mechanism (T2SAM). The self-attention mechanism is added to the transformer to solve the problem of coreference in text. This makes the system to understand the text better. The proposed T2SAM model improves the performance of text summarization. It is trained on the Inshorts News dataset combined with the DUC-2004 shared tasks dataset. The performance of the proposed model has been evaluated using the ROUGE metrics, and it has been shown to outperform the existing state-of-the-art baseline models. The proposed model gives the training loss minimum to 1.8220 from 10.3058 (at the starting point) up to 30 epochs, and it achieved model accuracy 48.50% F1-Score on both the Inshorts and DUC-2004 news datasets.\n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data\n\n\nKaggle and Inshort news app",
"#### Data Collection and Processing\n\n\nweb scraping",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\nURL\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nInshorts News dataset Inshorts provides a news summary in 60 words or less. Inshorts is a news service that offers short summaries of news from around the web. This dataset contains headlines and a summary of news items and their source.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: An abstractive text summarization technique using transformer model with self-attention mechanism\n- URL\n\nNeural Computing and Applications 2023\nSandeep Kumar, Arun Solanki\n\nCreating a summarized version of a text document that still conveys precise meaning is an incredibly complex endeavor in natural language processing (NLP). Abstract text summarization (ATS) is the process of using facts from source sentences and merging them into concise representations while maintaining the content and intent of the text. Manually summarizing large amounts of text are challenging and time-consuming for humans. Therefore, text summarization has become an exciting research focus in NLP. This research paper proposed an ATS model using a Transformer Technique with Self-Attention Mechanism (T2SAM). The self-attention mechanism is added to the transformer to solve the problem of coreference in text. This makes the system to understand the text better. The proposed T2SAM model improves the performance of text summarization. It is trained on the Inshorts News dataset combined with the DUC-2004 shared tasks dataset. The performance of the proposed model has been evaluated using the ROUGE metrics, and it has been shown to outperform the existing state-of-the-art baseline models. The proposed model gives the training loss minimum to 1.8220 from 10.3058 (at the starting point) up to 30 epochs, and it achieved model accuracy 48.50% F1-Score on both the Inshorts and DUC-2004 news datasets.\n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data\n\n\nKaggle and Inshort news app",
"#### Data Collection and Processing\n\n\nweb scraping",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\nURL\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
6,
63,
4,
40,
380,
3,
4,
9,
6,
5,
7,
11,
10,
10,
9,
5,
9,
8,
10,
47,
8,
7,
10,
5
] |
[
    "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nInshorts News dataset Inshorts provides a news summary in 60 words or less. Inshorts is a news service that offers short summaries of news from around the web. This dataset contains headlines and a summary of news items and their source.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: An abstractive text summarization technique using transformer model with self-attention mechanism\n- URL\n\nNeural Computing and Applications 2023\nSandeep Kumar, Arun Solanki\n\nCreating a summarized version of a text document that still conveys precise meaning is an incredibly complex endeavor in natural language processing (NLP). Abstract text summarization (ATS) is the process of using facts from source sentences and merging them into concise representations while maintaining the content and intent of the text. Manually summarizing large amounts of text are challenging and time-consuming for humans. Therefore, text summarization has become an exciting research focus in NLP. This research paper proposed an ATS model using a Transformer Technique with Self-Attention Mechanism (T2SAM). The self-attention mechanism is added to the transformer to solve the problem of coreference in text. This makes the system to understand the text better. The proposed T2SAM model improves the performance of text summarization. It is trained on the Inshorts News dataset combined with the DUC-2004 shared tasks dataset. The performance of the proposed model has been evaluated using the ROUGE metrics, and it has been shown to outperform the existing state-of-the-art baseline models. The proposed model gives the training loss minimum to 1.8220 from 10.3058 (at the starting point) up to 30 epochs, and it achieved model accuracy 48.50% F1-Score on both the Inshorts and DUC-2004 news datasets.\n- Demo [optional]:## Uses### Direct Use"
] |
62ca06f122a7f93df7d2641f2305d5073c0351e5
|
# Dataset Card for "hf-stack-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
smangrul/hf-stack-v3
|
[
"region:us"
] |
2023-10-12T10:13:54+00:00
|
{"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 84459082, "num_examples": 5139}], "download_size": 27283429, "dataset_size": 84459082}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T10:13:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hf-stack-v3"
More Information needed
|
[
"# Dataset Card for \"hf-stack-v3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hf-stack-v3\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hf-stack-v3\"\n\nMore Information needed"
] |
f7300cb56d20414d5fecac3d34a3a4809cdd2d27
|
# Dataset Card for "indic-xnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ai4bharat/IndicXNLI-Translated
|
[
"region:us"
] |
2023-10-12T10:55:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "itv2 hi premise", "dtype": "string"}, {"name": "itv2 hi hypothesis", "dtype": "string"}, {"name": "itv2 gu premise", "dtype": "string"}, {"name": "itv2 gu hypothesis", "dtype": "string"}, {"name": "itv2 kn premise", "dtype": "string"}, {"name": "itv2 kn hypothesis", "dtype": "string"}, {"name": "itv2 ml premise", "dtype": "string"}, {"name": "itv2 ml hypothesis", "dtype": "string"}, {"name": "itv2 mr premise", "dtype": "string"}, {"name": "itv2 mr hypothesis", "dtype": "string"}, {"name": "itv2 or premise", "dtype": "string"}, {"name": "itv2 or hypothesis", "dtype": "string"}, {"name": "itv2 pa premise", "dtype": "string"}, {"name": "itv2 pa hypothesis", "dtype": "string"}, {"name": "itv2 bn premise", "dtype": "string"}, {"name": "itv2 bn hypothesis", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 8389920, "num_examples": 5010}, {"name": "validation", "num_bytes": 4161518, "num_examples": 2490}], "download_size": 4269813, "dataset_size": 12551438}}
|
2023-10-13T12:46:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "indic-xnli"
More Information needed
|
[
"# Dataset Card for \"indic-xnli\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"indic-xnli\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"indic-xnli\"\n\nMore Information needed"
] |
a97b744c8a01f77cfee745183bbf8ef85a07b691
|
# Dataset Card for "FairRationales"
## Dataset Summary
We present a new collection of annotations for a subset of CoS-E [[1]](#1), DynaSent [[2]](#2), and SST [[3]](#3)/Zuco [[4]](#4) with demographics-augmented annotations, balanced across age and ethnicity.
We asked participants to choose a label and then provide supporting evidence (rationales) based on the input sentence for their answer.
Existing rationale datasets are typically constructed by giving annotators 'gold standard' labels,
and having them provide rationales for these labels.
Instead, we let annotators provide rationales for labels they choose themselves. This lets them engage
in the decision process, but it also acknowledges
that annotators with different backgrounds may disagree on classification decisions. Explaining other
people’s choices is error-prone [[5]](#5), and we do not want to bias the rationale
annotations by providing labels that align better
with the intuitions of some demographics than with
those of others.
Our annotators are balanced across age and ethnicity for six demographic groups, defined by
ethnicity {Black/African American, White/Caucasian, Latino/Hispanic} and age {Old, Young}.
Therefore, we can refer to our groups as their cross-product: **{BO, BY, WO, WY, LO, LY}**.
## Dataset Details
### DynaSent
We re-annotate N=480 instances
six times (for six demographic groups), comprising
240 instances labeled as positive, and 240 instances
labeled as negative in the DynaSent Round 2 **test**
set (see [[2]](#2)). This amounts to 2,880
annotations, in total.
To annotate rationales, we formulate the task as
marking 'supporting evidence' for the label, following how the task is defined by [[6]](#6). Specifically, we ask annotators to mark
all the words in the sentence that they think show
evidence for their chosen label.
#### >Our annotations:
negative 1555 |
positive 1435 |
no sentiment 470
Total 3460
Note that all the data is uploaded under a single 'train' split (read [Uses](#uses) for further details).
### SST2
We re-annotate N=263 instances six
times (for six demographic groups), which are all
the positive and negative instances from the Zuco*
dataset of Hollenstein et al. (2018), comprising a
**mixture of train, validation and test** set instances
from SST-2, *which should be removed from the original SST
data before training any model*.
These 263 reannotated instances do not contain any instances originally marked as `neutral` (or not conveying sentiment) because rationale annotation for neutral instances is ill-defined. Yet,
we still allow annotators to evaluate a sentence as
neutral, since we do not want to force our annotators to provide rationales for positive and negative
sentiment that they do not see.
*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,
we add an extra layer of information for future research.
#### >Our annotations:
positive 1027 |
negative 900 |
no sentiment 163
Total 2090
Note that all the data is uploaded under a single 'train' split (read [Uses](#uses) for further details).
### CoS-E
We use the simplified version of CoS-E released by [[6]](#6).
We re-annotate N=500 instances from
the CoS-E **test** set six times (for six demographic groups)
and ask annotators to firstly select the answer to
the question that they find most correct and sensible, and then mark words that justify that answer.
Following [[7]](#7), we specify the
rationale task with a wording that should guide
annotators to make short, precise rationale annotations:
‘For each word in the question, if you
think that removing it will decrease your
confidence toward your chosen label,
please mark it.’
#### >Our annotations:
Total 3760
Note that all the data is uploaded under a single 'train' split (read [Uses](#uses) for further details).
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/terne/Being_Right_for_Whose_Right_Reasons
- **Paper:** [Being Right for Whose Right Reasons?](https://aclanthology.org/2023.acl-long.59/)
## Uses <a id="uses"></a>
<!-- Address questions around how the dataset is intended to be used. -->
In our paper, we present a collection of three
existing datasets (SST2, DynaSent and CoS-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided
by different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for and fosters future research on performance equality/robustness.
For each dataset, we provide the data under a unique **'train'** split due to the current limitation that a dataset cannot be uploaded with a single *'test'* split.
Note, however, that the original intended use of this collection of datasets was to **test** the quality & alignment of post-hoc explainability methods.
If you use it with different splits, please make this clear to ease reproducibility of your work.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
| Variable | Description |
| --- | --- |
| QID | The ID of the Question (i.e. the annotation element/sentence) in the Qualtrics survey. Every second question asked for the classification and every other asked for the rationale, of the classification, to be marked. These two questions and answers for the same sentence is merged to one row and therefore the QID looks as if every second is skipped. |
| text_id | A numerical ID given to each unique text/sentence for easy sorting before comparing annotations across groups. |
| sentence | The text/sentence that is annotated, in its original formatting. |
| label | The (new) label given by the respective annotator/participant from Prolific. |
| label_index | The numerical format of the (new) label. |
| original_label | The label from the original dataset (Cose/Dynasent/SST). |
| rationale | The tokens marked as rationales by our annotators. |
| rationale_index | The indices of the tokens marked as rationales. In the processed files the index starts at 0. However, in the unprocessed files ("_all.csv", "_before_exclussions.csv") the index starts at 1.|
| rationale_binary | A binary version of the rationales where a token marked as part of the rationale = 1 and tokens not marked = 0. |
| age | The reported age of the annotator/participant (i.e. their survey response). This may be different from the age-interval the participant was recruited by (see recruitment_age). |
| recruitment_age | The age interval specified for the Prolific job to recruit the participant by. A mismatch between this and the participant's reported age, when asked in our survey, may mean a number of things, such as: Prolific's information is wrong or outdated; the participant made a mistake when answering the question; the participant was inattentive. |
| ethnicity | The reported ethnicity of the annotator/participant. This may be different from the ethnicity the participant was recruited by (see recruitment_ethnicity). |
| recruitment_ethnicity | The ethnicity specified for the Prolific job to recruit the participant by. Sometimes there is a mismatch between the information Prolific has on participants (which we use for recruitment) and what the participants report when asked again in the survey/task. This seems especially prevalent with some ethnicities, likely because participants may in reality identify with more than one ethnic group. |
| gender | The reported gender of the annotator/participant. |
| english_proficiency | The reported English-speaking ability (proxy for English proficiency) of the annotator/participant. Options were "Not well", "Well" or "Very well". |
| attentioncheck | All participants were given a simple attention check question at the very end of the Qualtrics survey (i.e. after annotation) which was either PASSED or FAILED. Participants who failed the check were still paid for their work, but their response should be excluded from the analysis. |
| group_id | An id describing the socio-demographic subgroup a participant belongs to and was recruited by. |
| originaldata_id | The id given to the text/sentence in the original dataset. In the case of SST data, this refers to ids within the Zuco dataset – a subset of SST which was used in our study.|
| annotator_ID | Anonymised annotator ID to enable analyses such as annotator (dis)agreement. |
| sst2_id | The processed SST annotations contain an extra column with the index of the text in the SST-2 dataset. -1 means that we were unable to match the text to an instance in SST-2 |
| sst2_split | The processed SST annotations contain an extra column referring to the split in which the instance appears within SST-2. Some instances are part of the train set and should therefore be removed before training a model on SST-2 and testing on our annotations. |
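The `rationale_index` and `rationale_binary` columns encode the same annotation in two forms. As a rough sketch of the relationship (a hypothetical helper, not part of the released tooling; assumes the 0-based indexing used in the processed files):

```python
# Sketch: derive a binary rationale vector from marked token indices,
# assuming 0-based indices as in the processed annotation files.
def rationale_to_binary(tokens, rationale_index):
    """Return 1 for tokens whose position was marked as rationale, else 0."""
    marked = set(rationale_index)
    return [1 if i in marked else 0 for i in range(len(tokens))]

tokens = "the movie was surprisingly good".split()
print(rationale_to_binary(tokens, [3, 4]))  # -> [0, 0, 0, 1, 1]
```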
## Dataset Creation
### Curation Rationale
Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard. Being Right for Whose Right Reasons?
In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
#### Annotation process
We refer to our [paper](https://aclanthology.org/2023.acl-long.59/) for further details on the data (Section 3), and specifically on the Annotation Process (Section 3.1) and Annotator Population (Section 3.2).
#### Who are the annotators?
Annotators were recruited via Prolific and consented to the use of their responses and demographic information for research purposes.
The annotation tasks were conducted through Qualtrics surveys. The exact surveys can be found [here](https://github.com/terne/Being_Right_for_Whose_Right_Reasons/tree/main/data/qualtrics_survey_exports).
## References
<a id="1">[1]</a>
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
<a id="2">[2]</a>
Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A Dynamic Benchmark for Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2388–2404, Online. Association for Computational Linguistics.
<a id="3">[3]</a>
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
<a id="4">[4]</a>
Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. Zuco, a simultaneous eeg and eye-tracking resource for natural sentence reading. Scientific Data.
<a id="5">[5]</a>
Kate Barasz and Tami Kim. 2022. Choice perception: Making sense (and nonsense) of others’ decisions. Current opinion in psychology, 43:176–181.
<a id="6">[6]</a>
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models.
<a id="7">[7]</a>
Cheng-Han Chiang and Hung-yi Lee. 2022. Reexamining human annotations for interpretable nlp.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@inproceedings{thorn-jakobsen-etal-2023-right,
title = "Being Right for Whose Right Reasons?",
author = "Thorn Jakobsen, Terne Sasha and
Cabello, Laura and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.59",
doi = "10.18653/v1/2023.acl-long.59",
pages = "1033--1054",
abstract = "Explainability methods are used to benchmark the extent to which model predictions align with human rationales i.e., are {`}right for the right reasons{'}. Previous work has failed to acknowledge, however, that what counts as a rationale is sometimes subjective. This paper presents what we think is a first of its kind, a collection of human rationale annotations augmented with the annotators demographic information. We cover three datasets spanning sentiment analysis and common-sense reasoning, and six demographic groups (balanced across age and ethnicity). Such data enables us to ask both what demographics our predictions align with and whose reasoning patterns our models{'} rationales align with. We find systematic inter-group annotator disagreement and show how 16 Transformer-based models align better with rationales provided by certain demographic groups: We find that models are biased towards aligning best with older and/or white annotators. We zoom in on the effects of model size and model distillation, finding {--}contrary to our expectations{--} negative correlations between model size and rationale agreement as well as no evidence that either model size or model distillation improves fairness.",
}
```
## Dataset Card Contact
Thanks to [@lautel](https://github.com/lautel) for adding this dataset.
|
coastalcph/fair-rationales
|
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"source_datasets:extended",
"language:en",
"license:mit",
"bias",
"fairness",
"rationale",
"demographic",
"region:us"
] |
2023-10-12T10:57:58+00:00
|
{"annotations_creators": ["crowdsourced"], "language": ["en"], "license": "mit", "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "open-domain-qa"], "pretty_name": "FairRationales", "tags": ["bias", "fairness", "rationale", "demographic"]}
|
2023-10-13T11:54:10+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #task_ids-sentiment-classification #task_ids-open-domain-qa #annotations_creators-crowdsourced #source_datasets-extended #language-English #license-mit #bias #fairness #rationale #demographic #region-us
|
Dataset Card for "FairRationales"
=================================
Dataset Summary
---------------
We present a new collection of annotations for a subset of CoS-E [[1]](#1), DynaSent [[2]](#2), and SST [[3]](#3)/Zuco [[4]](#4) with demographics-augmented annotations, balanced across age and ethnicity.
We asked participants to choose a label and then provide supporting evidence (rationales) based on the input sentence for their answer.
Existing rationale datasets are typically constructed by giving annotators 'gold standard' labels,
and having them provide rationales for these labels.
Instead, we let annotators provide rationales for labels they choose themselves. This lets them engage
in the decision process, but it also acknowledges
that annotators with different backgrounds may disagree on classification decisions. Explaining other
people’s choices is error-prone [[5]](#5), and we do not want to bias the rationale
annotations by providing labels that align better
with the intuitions of some demographics than with
those of others.
Our annotators are balanced across age and ethnicity for six demographic groups, defined by
ethnicity {Black/African American, White/Caucasian, Latino/Hispanic} and age {Old, Young}.
Therefore, we can refer to our groups as their cross-product: {BO, BY, WO, WY, LO, LY}.
Dataset Details
---------------
### DynaSent
We re-annotate N=480 instances
six times (for six demographic groups), comprising
240 instances labeled as positive, and 240 instances
labeled as negative in the DynaSent Round 2 test
set (see [[2]](#2)). This amounts to 2,880
annotations, in total.
To annotate rationales, we formulate the task as
marking 'supporting evidence' for the label, following how the task is defined by [[6]](#6). Specifically, we ask annotators to mark
all the words in the sentence that they think show
evidence for their chosen label.
#### >Our annotations:
negative 1555 |
positive 1435 |
no sentiment 470
Total 3460
Note that all the data is uploaded under a single 'train' split (read ## Uses for further details).
### SST2
We re-annotate N=263 instances six
times (for six demographic groups), which are all
the positive and negative instances from the Zuco\*
dataset of Hollenstein et al. (2018), comprising a
mixture of train, validation and test set instances
from SST-2, *which should be removed from the original SST
data before training any model*.
These 263 reannotated instances do not contain any instances originally marked as 'neutral' (or not conveying sentiment) because rationale annotation for neutral instances is ill-defined. Yet,
we still allow annotators to evaluate a sentence as
neutral, since we do not want to force our annotators to provide rationales for positive and negative
sentiment that they do not see.
\*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,
we add an extra layer of information for future research.
#### >Our annotations:
positive 1027 |
negative 900 |
no sentiment 163
Total 2090
Note that all the data is uploaded under a single 'train' split (read ## Uses for further details).
### CoS-E
We use the simplified version of CoS-E released by [[6]](#6).
We re-annotate N=500 instances from
the CoS-E test set six times (for six demographic groups)
and ask annotators to firstly select the answer to
the question that they find most correct and sensible, and then mark words that justify that answer.
Following [[7]](#7), we specify the
rationale task with a wording that should guide
annotators to make short, precise rationale annotations:
‘For each word in the question, if you
think that removing it will decrease your
confidence toward your chosen label,
please mark it.’
#### >Our annotations:
Total 3760
Note that all the data is uploaded under a single 'train' split (read ## Uses for further details).
### Dataset Sources
* Repository: URL
* Paper: Being Right for Whose Right Reasons?
## Uses
In our paper, we present a collection of three
existing datasets (SST2, DynaSent and CoS-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided
by different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for and fosters future research on performance equality/robustness.
For each dataset, we provide the data under a unique 'train' split due to the current limitation that a dataset cannot be uploaded with a single *'test'* split.
Note, however, that the original intended use of this collection of datasets was to test the quality & alignment of post-hoc explainability methods.
If you use it with different splits, please make this clear to ease reproducibility of your work.
Dataset Structure
-----------------
Dataset Creation
----------------
### Curation Rationale
Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard. Being Right for Whose Right Reasons?
In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
#### Annotation process
We refer to our paper for further details on the data (Section 3), and specifically on the Annotation Process (Section 3.1) and Annotator Population (Section 3.2).
#### Who are the annotators?
Annotators were recruited via Prolific and consented to the use of their responses and demographic information for research purposes.
The annotation tasks were conducted through Qualtrics surveys. The exact surveys can be found here.
References
----------
[1]
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
[2]
Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A Dynamic Benchmark for Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2388–2404, Online. Association for Computational Linguistics.
[3]
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
[4]
Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. Zuco, a simultaneous eeg and eye-tracking resource for natural sentence reading. Scientific Data.
[5]
Kate Barasz and Tami Kim. 2022. Choice perception: Making sense (and nonsense) of others’ decisions. Current opinion in psychology, 43:176–181.
[6]
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models.
[7]
Cheng-Han Chiang and Hung-yi Lee. 2022. Reexamining human annotations for interpretable nlp.
Dataset Card Contact
--------------------
Thanks to @lautel for adding this dataset.
|
[
    "### DynaSent\n\n\nWe re-annotate N=480 instances\nsix times (for six demographic groups), comprising\n240 instances labeled as positive, and 240 instances\nlabeled as negative in the DynaSent Round 2 test\nset (see [[2]](#2)). This amounts to 2,880\nannotations, in total.\nTo annotate rationales, we formulate the task as\nmarking 'supporting evidence' for the label, following how the task is defined by [[6]](#6). Specifically, we ask annotators to mark\nall the words in the sentence that they think show\nevidence for their chosen label.",
"#### >Our annotations:\n\n\nnegative 1555 |\npositive 1435 |\nno sentiment 470 \n\nTotal 3460\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).",
"### SST2\n\n\nWe re-annotate N=263 instances six\ntimes (for six demographic groups), which are all\nthe positive and negative instances from the Zuco\\*\ndataset of Hollenstein et al. (2018), comprising a\nmixture of train, validation and test set instances\nfrom SST-2, *which should be removed from the original SST\ndata before training any model*.\n\n\nThese 263 reannotated instances do not contain any instances originally marked as 'neutral' (or not conveying sentiment) because rationale annotation for neutral instances is ill-defined. Yet,\nwe still allow annotators to evaluate a sentence as\nneutral, since we do not want to force our annotators to provide rationales for positive and negative\nsentiment that they do not see.\n\n\n\\*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,\nwe add an extra layer of information for future research.",
"#### >Our annotations:\n\n\npositive 1027 |\nnegative 900 |\nno sentiment 163 \n\nTotal 2090\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).",
"### CoS-E\n\n\nWe use the simplified version of CoS-E released by [[6]](#6).\n\n\nWe re-annotate N=500 instances from\nthe CoS-E test set six times (for six demographic groups)\nand ask annotators to firstly select the answer to\nthe question that they find most correct and sensible, and then mark words that justifies that answer.\nFollowing [[7]](#7), we specify the\nrationale task with a wording that should guide\nannotators to make short, precise rationale annotations:\n\n\n‘For each word in the question, if you\nthink that removing it will decrease your\nconfidence toward your chosen label,\nplease mark it.’",
"#### >Our annotations:\n\n\nTotal 3760\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).",
"### Dataset Sources\n\n\n* Repository: URL\n* Paper: Being Right for Whose Right Reasons?",
"## Uses\n\n\nIn our paper, we present a collection of three\nexisting datasets (SST2, DynaSent and Cos-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided\nby different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for and fosters future research on performance equality/robustness.\n\n\nFor each dataset, we provide the data under a unique 'train' split due to the current limitation of not being possible to upload a dataset with a single *'test'* split.\nNote, however, that the original itended used of these collection of datasets was to test the quality & alignment of post-hoc explainability methods.\nIf you use it following different splits, please clarify it to ease reproducibility of your work.\n\n\nDataset Structure\n-----------------\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTerne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard. Being Right for Whose Right Reasons?\nIn the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).",
"#### Annotation process\n\n\nWe refer to our paper for further details on the data (Section 3), and specifically on the Annotation Process (Section 3.1) and Annotator Population (Section 3.2).",
"#### Who are the annotators?\n\n\nAnnotators were recruited via Prolific and consented to the use of their responses and demographic information for research purposes.\n\n\nThe annotation tasks were conducted through Qualtrics surveys. The exact surveys can be found here.\n\n\nReferences\n----------\n\n\n[1]\nNazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.\n\n\n[2]\nChristopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A Dynamic Benchmark for Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2388–2404, Online. Association for Computational Linguistics.\n\n\n[3]\nRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.\n\n\n[4]\nNora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. Zuco, a simultaneous eeg and eye-tracking resource for natural sentence reading. Scientific Data.\n\n\n[5]\nKate Barasz and Tami Kim. 2022. Choice perception: Making sense (and nonsense) of others’ decisions. Current opinion in psychology, 43:176–181.\n\n\n[6]\nJay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. 
Eraser: A benchmark to evaluate rationalized nlp models.\n\n\n[7]\nCheng-Han Chiang and Hung-yi Lee. 2022. Reexamining human annotations for interpretable nlp.\n\n\nDataset Card Contact\n--------------------\n\n\nThanks to @lautel for adding this dataset."
] |
[
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-open-domain-qa #annotations_creators-crowdsourced #source_datasets-extended #language-English #license-mit #bias #fairness #rationale #demographic #region-us \n",
"### DynaSent\n\n\nWe re-annotate N=480 instances\nsix times (for six demographic groups), comprising\n240 instances labeled as positive, and 240 instances\nlabeled as negative in the DynaSent Round 2 test\nset (see [[2]](#2)). This amounts to 2,880\nannotations, in total.\nTo annotate rationales, we formulate the task as\nmarking 'supporting evidence' for the label, following how the task is defined by [[6]](#6). Specifically, we ask annotators to mark\nall the words, in the sentence, they think shows\nevidence for their chosen label.",
"#### >Our annotations:\n\n\nnegative 1555 |\npositive 1435 |\nno sentiment 470 \n\nTotal 3460\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).",
"### SST2\n\n\nWe re-annotate N=263 instances six\ntimes (for six demographic groups), which are all\nthe positive and negative instances from the Zuco\\*\ndataset of Hollenstein et al. (2018), comprising a\nmixture of train, validation and test set instances\nfrom SST-2, *which should be removed from the original SST\ndata before training any model*.\n\n\nThese 263 reannotated instances do not contain any instances originally marked as 'neutral' (or not conveying sentiment) because rationale annotation for neutral instances is ill-defined. Yet,\nwe still allow annotators to evaluate a sentence as\nneutral, since we do not want to force our annotators to provide rationales for positive and negative\nsentiment that they do not see.\n\n\n\\*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,\nwe add an extra layer of information for future research.",
"#### >Our annotations:\n\n\npositive 1027 |\nnegative 900 |\nno sentiment 163 \n\nTotal 2090\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).",
"### CoS-E\n\n\nWe use the simplified version of CoS-E released by [[6]](#6).\n\n\nWe re-annotate N=500 instances from\nthe CoS-E test set six times (for six demographic groups)\nand ask annotators to firstly select the answer to\nthe question that they find most correct and sensible, and then mark words that justifies that answer.\nFollowing [[7]](#7), we specify the\nrationale task with a wording that should guide\nannotators to make short, precise rationale annotations:\n\n\n‘For each word in the question, if you\nthink that removing it will decrease your\nconfidence toward your chosen label,\nplease mark it.’",
"#### >Our annotations:\n\n\nTotal 3760\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).",
"### Dataset Sources\n\n\n* Repository: URL\n* Paper: Being Right for Whose Right Reasons?",
"## Uses\n\n\nIn our paper, we present a collection of three\nexisting datasets (SST2, DynaSent and Cos-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided\nby different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for and fosters future research on performance equality/robustness.\n\n\nFor each dataset, we provide the data under a unique 'train' split due to the current limitation of not being possible to upload a dataset with a single *'test'* split.\nNote, however, that the original itended used of these collection of datasets was to test the quality & alignment of post-hoc explainability methods.\nIf you use it following different splits, please clarify it to ease reproducibility of your work.\n\n\nDataset Structure\n-----------------\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTerne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard. Being Right for Whose Right Reasons?\nIn the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).",
"#### Annotation process\n\n\nWe refer to our paper for further details on the data (Section 3), and specifically on the Annotation Process (Section 3.1) and Annotator Population (Section 3.2).",
"#### Who are the annotators?\n\n\nAnnotators were recruited via Prolific and consented to the use of their responses and demographic information for research purposes.\n\n\nThe annotation tasks were conducted through Qualtrics surveys. The exact surveys can be found here.\n\n\nReferences\n----------\n\n\n[1]\nNazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.\n\n\n[2]\nChristopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A Dynamic Benchmark for Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2388–2404, Online. Association for Computational Linguistics.\n\n\n[3]\nRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.\n\n\n[4]\nNora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. Zuco, a simultaneous eeg and eye-tracking resource for natural sentence reading. Scientific Data.\n\n\n[5]\nKate Barasz and Tami Kim. 2022. Choice perception: Making sense (and nonsense) of others’ decisions. Current opinion in psychology, 43:176–181.\n\n\n[6]\nJay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. 
Eraser: A benchmark to evaluate rationalized nlp models.\n\n\n[7]\nCheng-Han Chiang and Hung-yi Lee. 2022. Reexamining human annotations for interpretable nlp.\n\n\nDataset Card Contact\n--------------------\n\n\nThanks to @lautel for adding this dataset."
] |
[
84,
139,
50,
218,
49,
153,
37,
25,
219,
65,
44,
538
] |
[
"passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-open-domain-qa #annotations_creators-crowdsourced #source_datasets-extended #language-English #license-mit #bias #fairness #rationale #demographic #region-us \n### DynaSent\n\n\nWe re-annotate N=480 instances\nsix times (for six demographic groups), comprising\n240 instances labeled as positive, and 240 instances\nlabeled as negative in the DynaSent Round 2 test\nset (see [[2]](#2)). This amounts to 2,880\nannotations, in total.\nTo annotate rationales, we formulate the task as\nmarking 'supporting evidence' for the label, following how the task is defined by [[6]](#6). Specifically, we ask annotators to mark\nall the words, in the sentence, they think shows\nevidence for their chosen label.#### >Our annotations:\n\n\nnegative 1555 |\npositive 1435 |\nno sentiment 470 \n\nTotal 3460\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).### SST2\n\n\nWe re-annotate N=263 instances six\ntimes (for six demographic groups), which are all\nthe positive and negative instances from the Zuco\\*\ndataset of Hollenstein et al. (2018), comprising a\nmixture of train, validation and test set instances\nfrom SST-2, *which should be removed from the original SST\ndata before training any model*.\n\n\nThese 263 reannotated instances do not contain any instances originally marked as 'neutral' (or not conveying sentiment) because rationale annotation for neutral instances is ill-defined. Yet,\nwe still allow annotators to evaluate a sentence as\nneutral, since we do not want to force our annotators to provide rationales for positive and negative\nsentiment that they do not see.\n\n\n\\*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,\nwe add an extra layer of information for future research.",
"passage: #### >Our annotations:\n\n\npositive 1027 |\nnegative 900 |\nno sentiment 163 \n\nTotal 2090\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).### CoS-E\n\n\nWe use the simplified version of CoS-E released by [[6]](#6).\n\n\nWe re-annotate N=500 instances from\nthe CoS-E test set six times (for six demographic groups)\nand ask annotators to firstly select the answer to\nthe question that they find most correct and sensible, and then mark words that justifies that answer.\nFollowing [[7]](#7), we specify the\nrationale task with a wording that should guide\nannotators to make short, precise rationale annotations:\n\n\n‘For each word in the question, if you\nthink that removing it will decrease your\nconfidence toward your chosen label,\nplease mark it.’#### >Our annotations:\n\n\nTotal 3760\n\n\nNote that all the data is uploaded under a single 'train' split (read ## Uses for further details).### Dataset Sources\n\n\n* Repository: URL\n* Paper: Being Right for Whose Right Reasons?## Uses\n\n\nIn our paper, we present a collection of three\nexisting datasets (SST2, DynaSent and Cos-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided\nby different socio-demographic groups. 
Such profiling enables us to ask whose right reasons models are being right for and fosters future research on performance equality/robustness.\n\n\nFor each dataset, we provide the data under a unique 'train' split due to the current limitation of not being possible to upload a dataset with a single *'test'* split.\nNote, however, that the original itended used of these collection of datasets was to test the quality & alignment of post-hoc explainability methods.\nIf you use it following different splits, please clarify it to ease reproducibility of your work.\n\n\nDataset Structure\n-----------------\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nTerne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard. Being Right for Whose Right Reasons?\nIn the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).",
"passage: #### Annotation process\n\n\nWe refer to our paper for further details on the data (Section 3), and specifically on the Annotation Process (Section 3.1) and Annotator Population (Section 3.2)."
] |
8164486f4f124142fe5e443993b10d4b78cbe6fe
|
A 12000 token long conversation dataset
Why did I do this?
|
NobodyExistsOnTheInternet/12000kLongConversations
|
[
"license:mit",
"region:us"
] |
2023-10-12T11:07:14+00:00
|
{"license": "mit"}
|
2023-10-12T11:08:48+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
A 12000 token long conversation dataset
Why did I do this?
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
912e7fd5855502d2cdea305a7b15333ecac433cc
|
# Dataset Card for "plmn5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
csupiisc/plmn5k
|
[
"region:us"
] |
2023-10-12T11:07:42+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 316453, "num_examples": 4000}, {"name": "test", "num_bytes": 78777, "num_examples": 1000}], "download_size": 158446, "dataset_size": 395230}}
|
2023-10-12T11:07:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "plmn5k"
More Information needed
|
[
"# Dataset Card for \"plmn5k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"plmn5k\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"plmn5k\"\n\nMore Information needed"
] |
2c9f528ccac4419a638c30070ce999601f936e78
|
# Dataset Card for "DT_data_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Asaad101/DT_data_0
|
[
"region:us"
] |
2023-10-12T11:13:23+00:00
|
{"dataset_info": {"features": [{"name": "images", "sequence": "image"}, {"name": "actions", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 14340602.0, "num_examples": 23}], "download_size": 4232, "dataset_size": 14340602.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T11:13:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "DT_data_0"
More Information needed
|
[
"# Dataset Card for \"DT_data_0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"DT_data_0\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"DT_data_0\"\n\nMore Information needed"
] |
7f2092cd1facecaf84081cb5120772365079a72a
|
# The Hermes_preference dataset
<!-- Provide a quick summary of the dataset. -->
The **Hermes_preference** dataset is a preference (feedback) dataset used for training the reward models that power RLHF!
In addition, the **Hermes_preference** dataset can also be used for DPO!
We collected the preference data from several popular feedback datasets ([UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf), [rlhf-reward-datasets](https://huggingface.co/datasets/yitingxie/rlhf-reward-datasets)) through sampling and preprocessing.
As a result, we collected approximately 190K preference examples.
To gather high-quality feedback data, we drew from [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) & [rlhf-reward-datasets](https://huggingface.co/datasets/yitingxie/rlhf-reward-datasets), which are curated datasets.
In addition, we also collected data from [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) to accumulate examples that teach models to produce helpful and harmless responses.
We hope that the **Hermes_preference** dataset provides a promising resource for future RLHF & DPO research!
## Dataset Details
<!-- Provide a longer summary of what this dataset is. -->
The **Hermes_preference** dataset is a mixture of several popular preference datasets (UltraFeedback, hh-rlhf, rlhf-reward-datasets), as mentioned above.
The purpose of this dataset is to provide a preference dataset that covers more varied data.
To accomplish this, we selected UltraFeedback, hh-rlhf, and rlhf-reward-datasets as the base datasets.
More specifically, we sampled and preprocessed the datasets mentioned above to give the Hermes_preference dataset a more consistent structure.
- **Curated by:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** MIT
### Source Data
The Hermes_preference dataset consists of the following datasets.
- [**openbmb/UltraFeedback**](https://huggingface.co/datasets/openbmb/UltraFeedback)
- [**Anthropic/hh-rlhf**](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [**yitingxie/rlhf-reward-datasets**](https://huggingface.co/datasets/yitingxie/rlhf-reward-datasets)
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [gauss5930/Hermes](https://github.com/gauss5930/Hermes)
- **Model:** [Cartinoe5930/Hermes-7b]()
## Dataset Structure
The structure of the **Hermes_preference** dataset is as follows:
```
{
  "source": The source dataset of the example,
  "prompt": The instruction or question,
  "chosen": The chosen response,
  "rejected": The rejected response
}
```
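As a minimal sketch of how a record with this structure might be validated before training, the snippet below checks the four fields above (the sample values are invented for illustration only):

```python
# Hypothetical sample record following the schema above (values invented).
record = {
    "source": "UltraFeedback",
    "prompt": "Explain why the sky appears blue.",
    "chosen": "Sunlight scatters off air molecules; shorter blue wavelengths scatter most.",
    "rejected": "The sky is blue because it reflects the ocean.",
}

REQUIRED_FIELDS = ("source", "prompt", "chosen", "rejected")

def is_valid_record(rec):
    """Return True if the record has exactly the expected non-empty string fields."""
    return (
        set(rec) == set(REQUIRED_FIELDS)
        and all(isinstance(rec[f], str) and rec[f] for f in REQUIRED_FIELDS)
    )

print(is_valid_record(record))  # -> True
```

A check like this can be run over the whole dataset before feeding it to a reward-model or DPO trainer, so malformed rows fail fast rather than mid-training.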
|
Cartinoe5930/Hermes_preference
|
[
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] |
2023-10-12T11:20:06+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"]}
|
2023-10-19T10:55:36+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-100K<n<1M #language-English #license-mit #region-us
|
# The Hermes_preference dataset
The Hermes_preference dataset is a preference (feedback) dataset used for training the reward models that power RLHF!
In addition, the Hermes_preference dataset can also be used for DPO!
We collected the preference data from several popular feedback datasets (UltraFeedback, hh-rlhf, rlhf-reward-datasets) through sampling and preprocessing.
As a result, we collected approximately 190K preference examples.
To gather high-quality feedback data, we drew from UltraFeedback & rlhf-reward-datasets, which are curated datasets.
In addition, we also collected data from hh-rlhf to accumulate examples that teach models to produce helpful and harmless responses.
We hope that the Hermes_preference dataset provides a promising resource for future RLHF & DPO research!
## Dataset Details
The Hermes_preference dataset is a mixture of several popular preference datasets (UltraFeedback, hh-rlhf, rlhf-reward-datasets), as mentioned above.
The purpose of this dataset is to provide a preference dataset that covers more varied data.
To accomplish this, we selected UltraFeedback, hh-rlhf, and rlhf-reward-datasets as the base datasets.
More specifically, we sampled and preprocessed the datasets mentioned above to give the Hermes_preference dataset a more consistent structure.
- Curated by:
- Language(s) (NLP): en
- License: MIT
### Source Data
The Hermes_preference dataset consists of the following datasets.
- openbmb/UltraFeedback
- Anthropic/hh-rlhf
- yitingxie/rlhf-reward-datasets
### Dataset Sources [optional]
- Repository: gauss5930/Hermes
- Model [Cartinoe5930/Hermes-7b]()
## Dataset Structure
The structure of the Hermes_preference dataset is as follows:
|
[
"# The Hermes_preference dataset\n\n\n\nThe Hermes_preference dataset is the type of feedback dataset, used for training reward models which is used for RLHF! \nIn addition, Hermes_preference dataset can be also used for DPO!\nWe collect the preference data from several popular feedback datasets(UltraFeedback, hh-rlhf, rlhf-reward-datasets) through sampling and preprocessing.\nAs a result, we could have collected approximately 190K preference data.\n\nTo collect high-quality feedback data, we decided to collect feedback data from UltraFeedback & rlhf-reward-datasets which are curated datasets.\nIn addition, we also collect the data from hh-rlhf to accumulate the data that teach the models to output helpful and harmless response.\nWe hope that Hermes_preference dataset provides a promising way to future RLHF & DPO research!",
"## Dataset Details\n\n\n\nThe Hermes_preference dataset is a mixture of several popular preference datasets(UltraFeedback, hh-rlhf, rlhf-reward-datasets) as we mentioned above.\nThe purpose of this dataset is to make a preference dataset that consists of more varied data.\nTo accomplish this purpose, we selected the UltraFeedback, hh-rlhf, and rlhf-reward-datasets as the base dataset.\nMore specifically, we sampled and preprocessed the datasets mentioned above to make Hermes_preference dataset more structural.\n\n\n\n- Curated by: \n- Language(s) (NLP): en\n- License: MIT",
"### Source Data\n\nThe Hermes_preference dataset consists of the following datasets.\n\n- openbmb/UltraFeedback\n- Anthropic/hh-rlhf\n- yitingxie/rlhf-reward-datasets",
"### Dataset Sources [optional]\n\n\n\n- Repository: gauss5930/Hermes\n- Model [Cartinoe5930/Hermes-7b]()",
"## Dataset Structure\n\nThe structure of Hermes_preference dataset is as follows:"
] |
[
"TAGS\n#size_categories-100K<n<1M #language-English #license-mit #region-us \n",
"# The Hermes_preference dataset\n\n\n\nThe Hermes_preference dataset is the type of feedback dataset, used for training reward models which is used for RLHF! \nIn addition, Hermes_preference dataset can be also used for DPO!\nWe collect the preference data from several popular feedback datasets(UltraFeedback, hh-rlhf, rlhf-reward-datasets) through sampling and preprocessing.\nAs a result, we could have collected approximately 190K preference data.\n\nTo collect high-quality feedback data, we decided to collect feedback data from UltraFeedback & rlhf-reward-datasets which are curated datasets.\nIn addition, we also collect the data from hh-rlhf to accumulate the data that teach the models to output helpful and harmless response.\nWe hope that Hermes_preference dataset provides a promising way to future RLHF & DPO research!",
"## Dataset Details\n\n\n\nThe Hermes_preference dataset is a mixture of several popular preference datasets(UltraFeedback, hh-rlhf, rlhf-reward-datasets) as we mentioned above.\nThe purpose of this dataset is to make a preference dataset that consists of more varied data.\nTo accomplish this purpose, we selected the UltraFeedback, hh-rlhf, and rlhf-reward-datasets as the base dataset.\nMore specifically, we sampled and preprocessed the datasets mentioned above to make Hermes_preference dataset more structural.\n\n\n\n- Curated by: \n- Language(s) (NLP): en\n- License: MIT",
"### Source Data\n\nThe Hermes_preference dataset consists of the following datasets.\n\n- openbmb/UltraFeedback\n- Anthropic/hh-rlhf\n- yitingxie/rlhf-reward-datasets",
"### Dataset Sources [optional]\n\n\n\n- Repository: gauss5930/Hermes\n- Model [Cartinoe5930/Hermes-7b]()",
"## Dataset Structure\n\nThe structure of Hermes_preference dataset is as follows:"
] |
[
27,
213,
160,
56,
38,
22
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #language-English #license-mit #region-us \n# The Hermes_preference dataset\n\n\n\nThe Hermes_preference dataset is the type of feedback dataset, used for training reward models which is used for RLHF! \nIn addition, Hermes_preference dataset can be also used for DPO!\nWe collect the preference data from several popular feedback datasets(UltraFeedback, hh-rlhf, rlhf-reward-datasets) through sampling and preprocessing.\nAs a result, we could have collected approximately 190K preference data.\n\nTo collect high-quality feedback data, we decided to collect feedback data from UltraFeedback & rlhf-reward-datasets which are curated datasets.\nIn addition, we also collect the data from hh-rlhf to accumulate the data that teach the models to output helpful and harmless response.\nWe hope that Hermes_preference dataset provides a promising way to future RLHF & DPO research!## Dataset Details\n\n\n\nThe Hermes_preference dataset is a mixture of several popular preference datasets(UltraFeedback, hh-rlhf, rlhf-reward-datasets) as we mentioned above.\nThe purpose of this dataset is to make a preference dataset that consists of more varied data.\nTo accomplish this purpose, we selected the UltraFeedback, hh-rlhf, and rlhf-reward-datasets as the base dataset.\nMore specifically, we sampled and preprocessed the datasets mentioned above to make Hermes_preference dataset more structural.\n\n\n\n- Curated by: \n- Language(s) (NLP): en\n- License: MIT### Source Data\n\nThe Hermes_preference dataset consists of the following datasets.\n\n- openbmb/UltraFeedback\n- Anthropic/hh-rlhf\n- yitingxie/rlhf-reward-datasets### Dataset Sources [optional]\n\n\n\n- Repository: gauss5930/Hermes\n- Model [Cartinoe5930/Hermes-7b]()"
] |
48045b0d30f2104614dd3e7e073e2d54f0e6117e
|
### Description:
The knowrohit07/know_medical_dialogues_v2 dataset is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricacies, uncertainties, and questions posed by individuals regarding their health and the medical guidance provided in response.
### 🎯 Intended Use:
This dataset is crafted for training Large Language Models (LLMs) with a focus on understanding and generating medically-informed dialogue. It's ideal for LLM applications aiming to provide medical information or insights, especially for scenarios with limited access to healthcare resources.
❗ Limitations:
While this dataset includes diverse interactions, it doesn't cover every medical scenario. Models trained on this data should be viewed as an additional resource, not a substitute for professional medical consultation.
📌 Data Source:
Conversational seed tasks or exchanges were collected from anonymized patient-doctor interactions and synthetically generated using GPT-4.
📋 Collection Methodology:
The data was meticulously curated to ensure no personally identifiable information remained. All conversations are representative of general concerns and advice, without specific case details.
### Advantages of the Dataset:
Broad Spectrum: The dataset encompasses a wide array of medical queries and advice, making it valuable for general medical conversational AI.
Diverse Interactions: It captures everything from symptom queries to post-care instructions.
Training Potential for LLMs: Specifically tailored for fine-tuning LLMs for medical conversations, enhancing the resultant model's capability in this domain.
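As an illustration of that fine-tuning use case, a single exchange could be rendered into one training string roughly like this (the field names and sample text are hypothetical; the card does not document the exact schema):

```python
# Hypothetical exchange -- field names and content are invented for illustration.
exchange = {
    "patient": "I've had a persistent cough for two weeks. Should I be worried?",
    "doctor": "A cough lasting more than two weeks is worth having examined; "
              "please see a clinician, especially if you have fever or chest pain.",
}

def to_training_text(ex):
    """Render one patient-doctor exchange as a single fine-tuning string."""
    return f"### Patient:\n{ex['patient']}\n\n### Doctor:\n{ex['doctor']}"

print(to_training_text(exchange))
```

The exact prompt template should match whatever chat format the target LLM was pretrained or instruction-tuned with.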
⚖️ Ethical and Impact Considerations:
Positive Impact: Utilizing LLMs trained on this dataset can be invaluable for healthcare professionals, especially in regions with limited medical datasets. When deployed on affordable local devices, doctors can leverage an AI-assisted tool, enhancing their consultation and decision-making processes.
Potential Risks: There's an inherent risk of the model providing guidance that may not match the latest medical guidelines or specific patient requirements. It's crucial to clarify to users that outputs from the LLM should complement professional medical opinions.
Recommendation: Encourage healthcare professionals to use this tool as an initial point of reference and not as the primary foundation for medical decisions.
|
knowrohit07/know_medical_dialogue_v2
|
[
"license:openrail",
"region:us"
] |
2023-10-12T11:22:27+00:00
|
{"license": "openrail"}
|
2023-12-18T21:57:32+00:00
|
[] |
[] |
TAGS
#license-openrail #region-us
|
### Description:
The knowrohit07/know_medical_dialogues_v2 dataset is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricacies, uncertainties, and questions posed by individuals regarding their health and the medical guidance provided in response.
### Intended Use:
This dataset is crafted for training Large Language Models (LLMs) with a focus on understanding and generating medically-informed dialogue. It's ideal for LLM applications aiming to provide medical information or insights, especially for scenarios with limited access to healthcare resources.
Limitations:
While this dataset includes diverse interactions, it doesn't cover every medical scenario. Models trained on this data should be viewed as an additional resource, not a substitute for professional medical consultation.
Data Source:
Conversational seed tasks and exchanges were collected from anonymized patient-doctor interactions and synthetically expanded using GPT-4.
Collection Methodology:
The data was meticulously curated to ensure no personally identifiable information remained. All conversations are representative of general concerns and advice, without specific case details.
### Advantages of the Dataset:
Broad Spectrum: The dataset encompasses a wide array of medical queries and advice, making it valuable for general medical conversational AI.
Diverse Interactions: It captures everything from symptom queries to post-care instructions.
Training Potential for LLMs: Specifically tailored for fine-tuning LLMs for medical conversations, enhancing the resultant model's capability in this domain.
⚖️ Ethical and Impact Considerations:
Positive Impact: Utilizing LLMs trained on this dataset can be invaluable for healthcare professionals, especially in regions with limited medical datasets. When deployed on affordable local devices, doctors can leverage an AI-assisted tool, enhancing their consultation and decision-making processes.
Potential Risks: There's an inherent risk of the model providing guidance that may not match the latest medical guidelines or specific patient requirements. It's crucial to clarify to users that outputs from the LLM should complement professional medical opinions.
Recommendation: Encourage healthcare professionals to use this tool as an initial point of reference and not as the primary foundation for medical decisions.
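To make the training use case concrete, here is a minimal sketch of turning one patient-doctor exchange into an instruction-tuning prompt. The record shape (`instruction`/`patient`/`doctor` keys) is an assumption for illustration only — the real column names should be checked with `dataset.features` after loading the dataset via `datasets.load_dataset`.

```python
# Hypothetical record shape for one patient-doctor exchange; the actual
# field names must be verified against the dataset schema before use.
example = {
    "instruction": "A patient asks about a persistent cough.",
    "patient": "I've had a dry cough for three weeks. Should I be worried?",
    "doctor": "A cough lasting over three weeks warrants an in-person check-up.",
}

def to_prompt(rec: dict) -> str:
    """Render one exchange as an instruction-tuning prompt/response pair."""
    return (
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Patient:\n{rec['patient']}\n\n"
        f"### Doctor:\n{rec['doctor']}"
    )

print(to_prompt(example))
```

The same function can be mapped over the full dataset to produce a text column for supervised fine-tuning.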
|
[
"### Description:\nThe knowrohit07/know_medical_dialogues_v2 dataset is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricacies, uncertainties, and questions posed by individuals regarding their health and the medical guidance provided in response.",
"### Intended Use:\nThis dataset is crafted for training Large Language Models (LLMs) with a focus on understanding and generating medically-informed dialogue. It's ideal for LLM applications aiming to provide medical information or insights, especially for scenarios with limited access to healthcare resources.\n\n Limitations:\nWhile this dataset includes diverse interactions, it doesn't cover every medical scenario. Models trained on this data should be viewed as an additional resource, not a substitute for professional medical consultation.\n\n Data Source:\nConversational seed tasks or exchanges were collected from anonymized patient-doctor interactions and synthetically made using GPT4.\n\n Collection Methodology:\nThe data was meticulously curated to ensure no personally identifiable information remained. All conversations are representative of general concerns and advice, without specific case details.",
"### Advantages of the Dataset:\nBroad Spectrum: The dataset encompasses a wide array of medical queries and advice, making it valuable for general medical conversational AI.\n\nDiverse Interactions: It captures everything from symptom queries to post-care instructions.\n\nTraining Potential for LLMs: Specifically tailored for fine-tuning LLMs for medical conversations, enhancing the resultant model's capability in this domain.\n\n️ Ethical and Impact Considerations:\nPositive Impact: Utilizing LLMs trained on this dataset can be invaluable for healthcare professionals, especially in regions with limited medical datasets. When deployed on affordable local devices, doctors can leverage an AI-assisted tool, enhancing their consultation and decision-making processes.\n\nPotential Risks: There's an inherent risk of the model providing guidance that may not match the latest medical guidelines or specific patient requirements. It's crucial to clarify to users that outputs from the LLM should complement professional medical opinions.\n\nRecommendation: Encourage healthcare professionals to use this tool as an initial point of reference and not as the primary foundation for medical decisions."
] |
[
"TAGS\n#license-openrail #region-us \n",
"### Description:\nThe knowrohit07/know_medical_dialogues_v2 dataset is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricacies, uncertainties, and questions posed by individuals regarding their health and the medical guidance provided in response.",
"### Intended Use:\nThis dataset is crafted for training Large Language Models (LLMs) with a focus on understanding and generating medically-informed dialogue. It's ideal for LLM applications aiming to provide medical information or insights, especially for scenarios with limited access to healthcare resources.\n\n Limitations:\nWhile this dataset includes diverse interactions, it doesn't cover every medical scenario. Models trained on this data should be viewed as an additional resource, not a substitute for professional medical consultation.\n\n Data Source:\nConversational seed tasks or exchanges were collected from anonymized patient-doctor interactions and synthetically made using GPT4.\n\n Collection Methodology:\nThe data was meticulously curated to ensure no personally identifiable information remained. All conversations are representative of general concerns and advice, without specific case details.",
"### Advantages of the Dataset:\nBroad Spectrum: The dataset encompasses a wide array of medical queries and advice, making it valuable for general medical conversational AI.\n\nDiverse Interactions: It captures everything from symptom queries to post-care instructions.\n\nTraining Potential for LLMs: Specifically tailored for fine-tuning LLMs for medical conversations, enhancing the resultant model's capability in this domain.\n\n️ Ethical and Impact Considerations:\nPositive Impact: Utilizing LLMs trained on this dataset can be invaluable for healthcare professionals, especially in regions with limited medical datasets. When deployed on affordable local devices, doctors can leverage an AI-assisted tool, enhancing their consultation and decision-making processes.\n\nPotential Risks: There's an inherent risk of the model providing guidance that may not match the latest medical guidelines or specific patient requirements. It's crucial to clarify to users that outputs from the LLM should complement professional medical opinions.\n\nRecommendation: Encourage healthcare professionals to use this tool as an initial point of reference and not as the primary foundation for medical decisions."
] |
[
12,
72,
191,
264
] |
[
"passage: TAGS\n#license-openrail #region-us \n### Description:\nThe knowrohit07/know_medical_dialogues_v2 dataset is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricacies, uncertainties, and questions posed by individuals regarding their health and the medical guidance provided in response.### Intended Use:\nThis dataset is crafted for training Large Language Models (LLMs) with a focus on understanding and generating medically-informed dialogue. It's ideal for LLM applications aiming to provide medical information or insights, especially for scenarios with limited access to healthcare resources.\n\n Limitations:\nWhile this dataset includes diverse interactions, it doesn't cover every medical scenario. Models trained on this data should be viewed as an additional resource, not a substitute for professional medical consultation.\n\n Data Source:\nConversational seed tasks or exchanges were collected from anonymized patient-doctor interactions and synthetically made using GPT4.\n\n Collection Methodology:\nThe data was meticulously curated to ensure no personally identifiable information remained. All conversations are representative of general concerns and advice, without specific case details."
] |
854ab8246e16be74fdfff9cc70f47ce1e14cffb0
|
Dataset generated with the setting:
- python projects/wiki_experts/cli_qa_creator.py e2e --model_setting=platy --n_icl=5 --sub_names=SUB_10 --num_iterations=4 --max_documents_per_subject=1000 --upload_to_hub=1
Standard prompts for response and instructions (0,0). Model setting platy.
|
ostapeno/platy_4iter_SUB_10_icl5_mD1000_prmp00
|
[
"region:us"
] |
2023-10-12T11:28:37+00:00
|
{}
|
2023-10-12T11:31:53+00:00
|
[] |
[] |
TAGS
#region-us
|
Dataset generated with the setting:
- python projects/wiki_experts/cli_qa_creator.py e2e --model_setting=platy --n_icl=5 --sub_names=SUB_10 --num_iterations=4 --max_documents_per_subject=1000 --upload_to_hub=1
Standard prompts for response and instructions (0,0). Model setting platy.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
2899cbebe13713390dc929ed88979de169e1a42b
|
# Dataset Card for "News-sentiments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
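Per the dataset metadata, each config (`bertplus`, `debert`, `distill`, `finbert`) stores the three class scores as strings under `postive`/`negative`/`neutral` (the `postive` spelling is as it appears in the schema). A small sketch of collapsing such a struct into a single label:

```python
def to_label(sentiment: dict) -> str:
    """Pick the highest-scoring class from a sentiment struct whose
    values are stored as strings, matching this dataset's schema."""
    scores = {k: float(v) for k, v in sentiment.items()}
    return max(scores, key=scores.get)

# Example struct mirroring the dataset's field names (values are illustrative).
row = {"postive": "0.12", "negative": "0.81", "neutral": "0.07"}
print(to_label(row))  # negative
```

The same helper works for both `headline_sentiment` and `summary_sentiment`, since the two structs share the same field names.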
|
sehyun66/News-sentiments
|
[
"region:us"
] |
2023-10-12T11:32:56+00:00
|
{"dataset_info": [{"config_name": "bertplus", "features": [{"name": "headline", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "headline_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}, {"name": "summary_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}], "splits": [{"name": "default", "num_bytes": 130253804, "num_examples": 316086}], "download_size": 73025646, "dataset_size": 130253804}, {"config_name": "debert", "features": [{"name": "headline", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "headline_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}, {"name": "summary_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}], "splits": [{"name": "default", "num_bytes": 130884482, "num_examples": 316086}], "download_size": 73648726, "dataset_size": 130884482}, {"config_name": "distill", "features": [{"name": "headline", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "headline_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}, {"name": "summary_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}], "splits": [{"name": "default", "num_bytes": 131086592, "num_examples": 316086}], "download_size": 71723929, "dataset_size": 131086592}, {"config_name": "finbert", "features": [{"name": "headline", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "headline_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": 
"negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}, {"name": "summary_sentiment", "struct": [{"name": "postive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "neutral", "dtype": "string"}]}], "splits": [{"name": "default", "num_bytes": 131074564, "num_examples": 316086}], "download_size": 73670360, "dataset_size": 131074564}], "configs": [{"config_name": "bertplus", "data_files": [{"split": "default", "path": "bertplus/default-*"}]}, {"config_name": "debert", "data_files": [{"split": "default", "path": "debert/default-*"}]}, {"config_name": "distill", "data_files": [{"split": "default", "path": "distill/default-*"}]}, {"config_name": "finbert", "data_files": [{"split": "default", "path": "finbert/default-*"}]}]}
|
2023-10-12T11:48:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "News-sentiments"
More Information needed
|
[
"# Dataset Card for \"News-sentiments\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"News-sentiments\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"News-sentiments\"\n\nMore Information needed"
] |
638b4e82b33e20eaf5915e3a6a113a1b386a6749
|
# Dataset Card for "Vadodara-Info-Converted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MananSantoki/Vadodara-Info-Converted
|
[
"region:us"
] |
2023-10-12T12:05:01+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 97472, "num_examples": 350}], "download_size": 38991, "dataset_size": 97472}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T12:05:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Vadodara-Info-Converted"
More Information needed
|
[
"# Dataset Card for \"Vadodara-Info-Converted\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Vadodara-Info-Converted\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Vadodara-Info-Converted\"\n\nMore Information needed"
] |
45cb5ebf2d4fdc709bd537933d3eb809fe0946fc
|
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
|
ovi054/video-data
|
[
"region:us"
] |
2023-10-12T12:08:02+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
|
2023-10-12T12:23:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
|
[
"# Dataset Card for Dataset Name",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
6,
8,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
bf8314bb20f17802e123c92e8e00313c9edd6a2c
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 10
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 0
## inverse_template: 0
|
ostapeno/platy_icl5_maxD10_maxC1000000_prmt00_3
|
[
"region:us"
] |
2023-10-12T12:15:29+00:00
|
{}
|
2023-10-12T12:15:40+00:00
|
[] |
[] |
TAGS
#region-us
|
## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 10
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 0
## inverse_template: 0
|
[
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 10",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 0",
"## inverse_template: 0"
] |
[
"TAGS\n#region-us \n",
"## model_setting_name: platy",
"## max_context_length: 512",
"## icl_examples: 5",
"## icl_dataset_name: lukaemon/mmlu",
"## max_documents_per_subject: 10",
"## max_contexts_per_subject: 1000000",
"## icl_use_out_options: True",
"## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all",
"## subjects: SUB_10",
"## response_template: 0",
"## inverse_template: 0"
] |
[
6,
9,
10,
9,
14,
12,
14,
12,
27,
7,
7,
8
] |
[
"passage: TAGS\n#region-us \n## model_setting_name: platy## max_context_length: 512## icl_examples: 5## icl_dataset_name: lukaemon/mmlu## max_documents_per_subject: 10## max_contexts_per_subject: 1000000## icl_use_out_options: True## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all## subjects: SUB_10## response_template: 0## inverse_template: 0"
] |
25b02c0615446a52c5847bda26c0928ebfa17f7a
|
# Dataset Card for Chinese National Pentatonic Mode Dataset
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/CNPM>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** Chinese Ethnic Pentatonic Scale; Database; Music Information Retrieval; Pentatonic Therapy
### Dataset Summary
Following the working approach of combining manual labeling with computational processing used in constructing the World Music Database, this database collects and labels audio in the five modes "Gong, Shang, Jue, Zhi and Yu" (covering pentatonic, hexatonic and heptatonic realizations). It also provides a detailed analysis of how Chinese national pentatonic modes are identified, and explores application scenarios and technical models, supplying raw data for the analysis and retrieval of Chinese national music characteristics.
### Supported Tasks and Leaderboards
MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav), .csv
### Data Fields
Mode Type, Name, Performer, Album Name, National Mode Name, Tonggong System, Audio Links
### Data Splits
train
## Usage
```
from datasets import load_dataset
dataset = load_dataset("ccmusic-database/CNPM", split='train')
for data in dataset:
print(data)
```
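Building on the loading snippet above, records can be grouped by mode for mode-level analysis. The key names below follow the Data Fields list (`Name`, `National Mode Name`); verify the exact column names with `dataset.column_names` before relying on them.

```python
from collections import defaultdict

def group_by_mode(records, key="National Mode Name"):
    """Group track records by their national mode name."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec.get("Name"))
    return dict(groups)

# Illustrative records shaped after the Data Fields section of this card.
sample = [
    {"Name": "Track A", "National Mode Name": "Gong"},
    {"Name": "Track B", "National Mode Name": "Yu"},
    {"Name": "Track C", "National Mode Name": "Gong"},
]
print(group_by_mode(sample))
```

Passing the loaded `dataset` in place of `sample` yields per-mode track lists for the whole split.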
## Dataset Creation
### Curation Rationale
Lack of a dataset for Chinese National Pentatonic Mode
### Source Data
#### Initial Data Collection and Normalization
Weixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li, Monan Zhou
#### Who are the source language producers?
Teachers & students from FD-LAMT, CCOM, SCCM
### Annotations
#### Annotation process
Following the working approach of combining manual labeling with computational processing used in constructing the World Music Database, this database collects and labels audio in the five modes "Gong, Shang, Jue, Zhi and Yu" (covering pentatonic, hexatonic and heptatonic realizations). It also provides a detailed analysis of how Chinese national pentatonic modes are identified, and explores application scenarios and technical models, supplying raw data for the analysis and retrieval of Chinese national music characteristics.
#### Who are the annotators?
Teachers & students from FD-LAMT, CCOM, SCCM
### Personal and Sensitive Information
Due to copyright reasons, only some of the audio can be released directly. This part of the audio is the Shang mode and Jue mode tracks performed by professional performers. The rest of the audio needs to be searched and downloaded by the dataset user from music platforms such as Kugou Music, NetEase Cloud Music and QQ Music, based on song titles, artists and album names.
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of music AI industry
### Discussion of Biases
Only for Traditional Chinese Instruments
### Other Known Limitations
Only for Pentatonic Mode
## Additional Information
### Dataset Curators
Weixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li
### Evaluation
[Ren Weixin, Che Mingjin, Wang Zhaowen, Meng Wenwu, Li Qinyu, Hu Jiayi, Xia Fan, Li Wei. CNPM Database: A Chinese National Pentatonic Mode Database for Computational Musicology [J]. Journal of Fudan University (Natural Science), 2022, 61(05): 555-563. DOI: 10.15943/j.cnki.fdxb-jns.20221017.008.](https://kns.cnki.net/kcms2/article/abstract?v=lD5CuVSaeOtw0E2oWliKSMrLiLDt9iwvkwoTgSclPspwUECyt4uNZ6T7DCLlfwMqohXCQXkFzf_XjAUOQ3CAkhPqNj20H8eG9UfUVuHEey0x7Kqp32fMlJiM9xuPtdVMvC1PB2qW0qI=&uniplatform=NZKPT&src=copy)
### Licensing Information
```
MIT License
Copyright (c) FD-LAMT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Yuan Wang, Zhaowen Wang, Wei Li and Zijin Li},
title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for Chinese National Pentatonic Mode
|
ccmusic-database/CNPM
|
[
"task_categories:audio-classification",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] |
2023-10-12T12:22:17+00:00
|
{"language": ["zh", "en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["audio-classification"], "pretty_name": "Chinese National Pentatonic Mode Dataset", "tags": ["music", "art"], "viewer": false}
|
2023-12-04T16:09:34+00:00
|
[] |
[
"zh",
"en"
] |
TAGS
#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #region-us
|
# Dataset Card for Chinese National Pentatonic Mode Dataset
## Dataset Description
- Homepage: <URL>
- Repository: <URL>
- Paper: <URL>
- Leaderboard: <URL>
- Point of Contact: Chinese Ethnic Pentatonic Scale; Database; Music Information Retrieval; Pentatonic Therapy
### Dataset Summary
Based on the working idea of combining manual labeling with computer in the construction of World Music Database, this database collects and labels the audio of five modes (including five tones, six tones and seven tones) of "Gong, Shang, Jue, Zhi and Yu". At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes, and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics.
### Supported Tasks and Leaderboards
MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav), .csv
### Data Fields
Mode Type, Name, Performer, Album Name, National Mode Name, Tonggong System, Audio Links
### Data Splits
train
## Usage
## Dataset Creation
### Curation Rationale
Lack of a dataset for Chinese National Pentatonic Mode
### Source Data
#### Initial Data Collection and Normalization
Weixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li, Monan Zhou
#### Who are the source language producers?
Teachers & students from FD-LAMT, CCOM, SCCM
### Annotations
#### Annotation process
Based on the working idea of combining manual labeling with computer in the construction of World Music Database, this database collects and labels the audio of five modes (including five tones, six tones and seven tones) of "Gong, Shang, Jue, Zhi and Yu". At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes, and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics.
#### Who are the annotators?
Teachers & students from FD-LAMT, CCOM, SCCM
### Personal and Sensitive Information
Due to copyright reasons, only some of the audio can be released directly. This part of the audio is the Shang mode and Jue mode tracks performed by professional performers. The rest of the audio needs to be searched and downloaded by the dataset user from music platforms such as Kugou Music, NetEase Cloud Music and QQ Music, based on song titles, artists and album names.
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of music AI industry
### Discussion of Biases
Only for Traditional Chinese Instruments
### Other Known Limitations
Only for Pentatonic Mode
## Additional Information
### Dataset Curators
Weixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li
### Evaluation
[Ren Weixin, Che Mingjin, Wang Zhaowen, Meng Wenwu, Li Qinyu, Hu Jiaye, Xia Fan, Li Wei. CNPM Database: A Chinese National Pentatonic Mode Database for Computational Musicology [J]. Journal of Fudan University (Natural Science), 2022, 61(05): 555-563. DOI: 10.15943/j.URL-jns.20221017.008.](URL
### Licensing Information
### Contributions
Provide a dataset for Chinese National Pentatonic Mode
|
[
"# Dataset Card for Chinese National Pentatonic Mode Dataset",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: Chinese Ethnic Pentatonic Scale; Database; Music Information Retrieval; Pentatonic Therapy",
"### Dataset Summary\nBased on the working idea of combining manual labeling with computer in the construction of World Music Database, this database collects and labels the audio of five modes (including five tones, six tones and seven tones) of \"Gong, Shang, Jue, Zhi and Yu\". At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes, and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics.",
"### Supported Tasks and Leaderboards\nMIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.wav), .csv",
"### Data Fields\nMode Type, Name, Performer, Album Name, National Mode Name, Tonggong System, Audio Links",
"### Data Splits\ntrain",
"## Usage",
"## Dataset Creation",
"### Curation Rationale\nLack of a dataset for Chinese National Pentatonic Mode",
"### Source Data",
"#### Initial Data Collection and Normalization\nWeixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li, Monan Zhou",
"#### Who are the source language producers?\nTeachers & students from FD-LAMT, CCOM, SCCM",
"### Annotations",
"#### Annotation process\nBased on the working idea of combining manual labeling with computer in the construction of World Music Database, this database collects and labels the audio of five modes (including five tones, six tones and seven tones) of \"Gong, Shang, Jue, Zhi and Yu\". At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes, and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics.",
"#### Who are the annotators?\nTeachers & students from FD-LAMT, CCOM, SCCM",
"### Personal and Sensitive Information\nDue to copyright reasons, only some of the audio can be released directly. This part of the audio is the Shang mode and Jue mode tracks performed by professional performers. The rest of the audio needs to be searched and downloaded by the dataset user from music platforms such as Kugou Music, NetEase Cloud Music and QQ Music, based on song titles, artists and album names.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nPromoting the development of music AI industry",
"### Discussion of Biases\nOnly for Traditional Chinese Instruments",
"### Other Known Limitations\nOnly for Pentatonic Mode",
"## Additional Information",
"### Dataset Curators\nWeixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li",
"### Evalution\n[任伟鑫,车明锦,汪照文,孟文武,李沁雨,胡佳弋,夏凡,李伟.CNPM Database:一个用于计算音乐学的中国民族五声调式数据库[J].复旦学报(自然科学版),2022,61(05):555-563.DOI:10.15943/j.URL-jns.20221017.008.](URL",
"### Licensing Information",
"### Contributions\nProvide a dataset for Chinese National Pentatonic Mode"
] |
[
"TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #region-us \n",
"# Dataset Card for Chinese National Pentatonic Mode Dataset",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: Chinese Ethnic Pentatonic Scale; Database; Music Information Retrieval; Pentatonic Therapy",
"### Dataset Summary\nBased on the working idea of combining manual labeling with computer in the construction of World Music Database, this database collects and labels the audio of five modes (including five tones, six tones and seven tones) of \"Gong, Shang, Jue, Zhi and Yu\". At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes, and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics.",
"### Supported Tasks and Leaderboards\nMIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.wav), .csv",
"### Data Fields\nMode Type, Name, Performer, Album Name, National Mode Name, Tonggong System, Audio Links",
"### Data Splits\ntrain",
"## Usage",
"## Dataset Creation",
"### Curation Rationale\nLack of a dataset for Chinese National Pentatonic Mode",
"### Source Data",
"#### Initial Data Collection and Normalization\nWeixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li, Monan Zhou",
"#### Who are the source language producers?\nTeachers & students from FD-LAMT, CCOM, SCCM",
"### Annotations",
"#### Annotation process\nBased on the working idea of combining manual labeling with computer in the construction of World Music Database, this database collects and labels the audio of five modes (including five tones, six tones and seven tones) of \"Gong, Shang, Jue, Zhi and Yu\". At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes, and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics.",
"#### Who are the annotators?\nTeachers & students from FD-LAMT, CCOM, SCCM",
"### Personal and Sensitive Information\nDue to copyright reasons, only some of the audio can be released directly. This part of the audio is the Shang mode and Jue mode tracks performed by professional performers. The rest of the audio needs to be searched and downloaded by the dataset user from music platforms such as Kugou Music, NetEase Cloud Music and QQ Music, based on song titles, artists and album names.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nPromoting the development of music AI industry",
"### Discussion of Biases\nOnly for Traditional Chinese Instruments",
"### Other Known Limitations\nOnly for Pentatonic Mode",
"## Additional Information",
"### Dataset Curators\nWeixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li",
"### Evalution\n[任伟鑫,车明锦,汪照文,孟文武,李沁雨,胡佳弋,夏凡,李伟.CNPM Database:一个用于计算音乐学的中国民族五声调式数据库[J].复旦学报(自然科学版),2022,61(05):555-563.DOI:10.15943/j.URL-jns.20221017.008.](URL",
"### Licensing Information",
"### Contributions\nProvide a dataset for Chinese National Pentatonic Mode"
] |
[
46,
13,
55,
124,
16,
7,
6,
18,
26,
6,
3,
5,
20,
4,
44,
26,
5,
123,
25,
97,
8,
15,
15,
13,
5,
35,
96,
6,
16
] |
[
"passage: TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #region-us \n# Dataset Card for Chinese National Pentatonic Mode Dataset## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: Chinese Ethnic Pentatonic Scale; Database; Music Information Retrieval; Pentatonic Therapy### Dataset Summary\nBased on the working idea of combining manual labeling with computer in the construction of World Music Database, this database collects and labels the audio of five modes (including five tones, six tones and seven tones) of \"Gong, Shang, Jue, Zhi and Yu\". At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes, and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics.### Supported Tasks and Leaderboards\nMIR, audio classification### Languages\nChinese, English## Dataset Structure### Data Instances\n.zip(.wav), .csv### Data Fields\nMode Type, Name, Performer, Album Name, National Mode Name, Tonggong System, Audio Links### Data Splits\ntrain## Usage## Dataset Creation### Curation Rationale\nLack of a dataset for Chinese National Pentatonic Mode### Source Data#### Initial Data Collection and Normalization\nWeixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li, Monan Zhou#### Who are the source language producers?\nTeachers & students from FD-LAMT, CCOM, SCCM### Annotations"
] |
00c8b6cfdad9e2606d6d9af11183af950f6e4e64
|
# Dataset Card for "spotlight-sayakpaul-nyu_depth_v2-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
renumics/spotlight-sayakpaul-nyu_depth_v2-enrichment
|
[
"region:us"
] |
2023-10-12T12:23:39+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image.embedding", "sequence": "float32", "length": 2}, {"name": "depth_map.embedding", "sequence": "float32", "length": 2}], "splits": [{"name": "train", "num_bytes": 761344, "num_examples": 47584}, {"name": "validation", "num_bytes": 10464, "num_examples": 654}], "download_size": 1073092, "dataset_size": 771808}}
|
2023-10-12T20:32:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "spotlight-sayakpaul-nyu_depth_v2-enrichment"
More Information needed
|
[
"# Dataset Card for \"spotlight-sayakpaul-nyu_depth_v2-enrichment\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spotlight-sayakpaul-nyu_depth_v2-enrichment\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spotlight-sayakpaul-nyu_depth_v2-enrichment\"\n\nMore Information needed"
] |
c814e122c13d4bfba3292c3df9b6c856f4c223a0
|
# Dataset Card for GZ_IsoTech Dataset
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/Guzheng_Tech99>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** <https://arxiv.org/abs/2209.08774>
### Dataset Summary
The Guzheng is a traditional Chinese instrument with diverse playing techniques. Instrument playing techniques (IPTs) play an important role in musical performance. However, most existing works on IPT detection show low efficiency on variable-length audio and offer no assurance of generalization, as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using fully convolutional networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio recording into notes, and its predictions are fused with the frame-wise IPT predictions. During fusion, we sum the IPT predictions frame by frame inside each note and take the IPT with the highest probability within each note as the final output for that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% frame-level accuracy and an 80.76% note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of the proposed method for IPT detection.
This database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 clips were collected from virtual sound banks, and 496 were played and recorded by a professional guzheng performer. The clips cover almost the entire range of the guzheng and the playing techniques most commonly used in guzheng performance. According to playing technique, the clips are divided into 8 categories: Vibrato (chanyin), Upward Portamento (shanghuayin), Downward Portamento (xiahuayin), Returning Portamento (huihuayin), Glissando (guazou, huazhi), Tremolo (yaozhi), Harmonic (fanyin), and Plucks (gou, da, mo, tuo…).
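The note-level fusion step described above can be sketched as follows. This is a simplified illustration in plain Python, not the authors' implementation: `frame_probs` stands in for the frame-wise IPT predictions and `note_bounds` for the note segmentation produced by the onset detector.

```python
def fuse_note_level(frame_probs, note_bounds):
    """Sum frame-wise IPT probabilities inside each note and return,
    per note, the class index with the highest accumulated probability."""
    predictions = []
    for start, end in note_bounds:
        n_classes = len(frame_probs[0])
        # Accumulate each class's probability over the note's frames.
        totals = [sum(frame[c] for frame in frame_probs[start:end])
                  for c in range(n_classes)]
        predictions.append(totals.index(max(totals)))
    return predictions

# Two notes over four frames, two IPT classes:
frame_probs = [[0.7, 0.3], [0.6, 0.4], [0.2, 0.8], [0.1, 0.9]]
print(fuse_note_level(frame_probs, [(0, 2), (2, 4)]))  # [0, 1]
```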
### Supported Tasks and Leaderboards
MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.flac, .csv)
### Data Fields
This database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 clips were collected from virtual sound banks, and 496 were played and recorded by a professional guzheng performer. The clips cover almost the entire range of the guzheng and the playing techniques most commonly used in guzheng performance. According to playing technique, the clips are divided into 8 categories: Vibrato (chanyin), Upward Portamento (shanghuayin), Downward Portamento (xiahuayin), Returning Portamento (huihuayin), Glissando (guazou, huazhi), Tremolo (yaozhi), Harmonic (fanyin), and Plucks (gou, da, mo, tuo…).
### Data Splits
train, valid, test
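For audio classification over the eight technique categories, a minimal sketch of mapping class names to integer labels. The class order and the `<technique>_<id>.flac` filename convention below are assumptions for illustration, not the dataset's actual naming scheme:

```python
# The eight playing-technique classes listed above, one pinyin name per class
# (the ordering here is an arbitrary choice for this sketch).
TECHNIQUES = ["chanyin", "shanghuayin", "xiahuayin", "huihuayin",
              "guazou", "yaozhi", "fanyin", "gou"]
LABELS = {name: index for index, name in enumerate(TECHNIQUES)}

def label_from_filename(path: str) -> int:
    """Map a hypothetical clip path like 'audio/yaozhi_012.flac' to a class index."""
    stem = path.rsplit("/", 1)[-1]
    return LABELS[stem.split("_", 1)[0]]

print(label_from_filename("audio/yaozhi_012.flac"))  # 5
```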
## Dataset Creation
### Curation Rationale
The Guzheng is a traditional Chinese instrument with diverse playing techniques. Instrument playing techniques (IPTs) play an important role in musical performance. However, most existing works on IPT detection show low efficiency on variable-length audio and offer no assurance of generalization, as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using fully convolutional networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio recording into notes, and its predictions are fused with the frame-wise IPT predictions. During fusion, we sum the IPT predictions frame by frame inside each note and take the IPT with the highest probability within each note as the final output for that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% frame-level accuracy and an 80.76% note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of the proposed method for IPT detection.
### Source Data
#### Initial Data Collection and Normalization
Dichucheng Li, Monan Zhou
#### Who are the source language producers?
Students from FD-LAMT
### Annotations
#### Annotation process
This database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 clips were collected from virtual sound banks, and 496 were played and recorded by a professional guzheng performer. The clips cover almost the entire range of the guzheng and the playing techniques most commonly used in guzheng performance. According to playing technique, the clips are divided into 8 categories: Vibrato (chanyin), Upward Portamento (shanghuayin), Downward Portamento (xiahuayin), Returning Portamento (huihuayin), Glissando (guazou, huazhi), Tremolo (yaozhi), Harmonic (fanyin), and Plucks (gou, da, mo, tuo…).
#### Who are the annotators?
Students from FD-LAMT
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of music AI industry
### Discussion of Biases
Only for Traditional Chinese Instruments
### Other Known Limitations
Insufficient sample
## Additional Information
### Dataset Curators
Dichucheng Li
### Evaluation
[Li, Dichucheng, Yulun Wu, Qinyu Li, Jiahao Zhao, Yi Yu, Fan Xia and Wei Li. “Playing Technique Detection by Fusing Note Onset Information in Guzheng Performance.” International Society for Music Information Retrieval Conference (2022).](https://archives.ismir.net/ismir2022/paper/000037.pdf)
### Licensing Information
```
MIT License
Copyright (c) FD-LAMT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Yuan Wang, Zhaowen Wang, Wei Li and Zijin Li},
title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Promoting the development of music AI industry
|
ccmusic-database/GZ_IsoTech
|
[
"task_categories:audio-classification",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"arxiv:2209.08774",
"region:us"
] |
2023-10-12T12:23:57+00:00
|
{"language": ["zh", "en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["audio-classification"], "pretty_name": "GZ_IsoTech Dataset", "tags": ["music", "art"]}
|
2023-12-04T16:08:36+00:00
|
[
"2209.08774"
] |
[
"zh",
"en"
] |
TAGS
#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #arxiv-2209.08774 #region-us
|
# Dataset Card for GZ_IsoTech Dataset
## Dataset Description
- Homepage: <URL>
- Repository: <URL>
- Paper: <URL>
- Leaderboard: <URL>
- Point of Contact: <URL>
### Dataset Summary
The Guzheng is a traditional Chinese instrument with diverse playing techniques. Instrument playing techniques (IPTs) play an important role in musical performance. However, most existing works on IPT detection show low efficiency on variable-length audio and offer no assurance of generalization, as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using fully convolutional networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio recording into notes, and its predictions are fused with the frame-wise IPT predictions. During fusion, we sum the IPT predictions frame by frame inside each note and take the IPT with the highest probability within each note as the final output for that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% frame-level accuracy and an 80.76% note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of the proposed method for IPT detection.
This database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).
### Supported Tasks and Leaderboards
MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.flac, .csv)
### Data Fields
This database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).
### Data Splits
train, valid, test
## Dataset Creation
### Curation Rationale
The Guzheng is a traditional Chinese instrument with diverse playing techniques. Instrument playing techniques (IPTs) play an important role in musical performance. However, most existing works on IPT detection show low efficiency on variable-length audio and offer no assurance of generalization, as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using fully convolutional networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio recording into notes, and its predictions are fused with the frame-wise IPT predictions. During fusion, we sum the IPT predictions frame by frame inside each note and take the IPT with the highest probability within each note as the final output for that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% frame-level accuracy and an 80.76% note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of the proposed method for IPT detection.
### Source Data
#### Initial Data Collection and Normalization
Dichucheng Li, Monan Zhou
#### Who are the source language producers?
Students from FD-LAMT
### Annotations
#### Annotation process
This database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).
#### Who are the annotators?
Students from FD-LAMT
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of music AI industry
### Discussion of Biases
Only for Traditional Chinese Instruments
### Other Known Limitations
Insufficient sample
## Additional Information
### Dataset Curators
Dichucheng Li
### Evaluation
Li, Dichucheng, Yulun Wu, Qinyu Li, Jiahao Zhao, Yi Yu, Fan Xia and Wei Li. “Playing Technique Detection by Fusing Note Onset Information in Guzheng Performance.” International Society for Music Information Retrieval Conference (2022).
### Licensing Information
### Contributions
Promoting the development of music AI industry
|
[
"# Dataset Card for GZ_IsoTech Dataset",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: <URL",
"### Dataset Summary\nThe Guzheng is a kind of traditional Chinese instruments with diverse playing techniques. Instrument playing techniques (IPT) play an important role in musical performance. However, most of the existing works for IPT detection show low efficiency for variable-length audio and provide no assurance in the generalization as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using Fully Convolutional Networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio into several notes and its predictions are fused with frame-wise IPT predictions. During fusion, we add the IPT predictions frame by frame inside each note and get the IPT with the highest probability within each note as the final output of that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of our proposed method in IPT detection.\n\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).",
"### Supported Tasks and Leaderboards\nMIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.flac, .csv)",
"### Data Fields\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).",
"### Data Splits\ntrain, valid, test",
"## Dataset Creation",
"### Curation Rationale\nThe Guzheng is a kind of traditional Chinese instruments with diverse playing techniques. Instrument playing techniques (IPT) play an important role in musical performance. However, most of the existing works for IPT detection show low efficiency for variable-length audio and provide no assurance in the generalization as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using Fully Convolutional Networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio into several notes and its predictions are fused with frame-wise IPT predictions. During fusion, we add the IPT predictions frame by frame inside each note and get the IPT with the highest probability within each note as the final output of that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of our proposed method in IPT detection.",
"### Source Data",
"#### Initial Data Collection and Normalization\nDichucheng Li, Monan Zhou",
"#### Who are the source language producers?\nStudents from FD-LAMT",
"### Annotations",
"#### Annotation process\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).",
"#### Who are the annotators?\nStudents from FD-LAMT",
"### Personal and Sensitive Information\nNone",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nPromoting the development of music AI industry",
"### Discussion of Biases\nOnly for Traditional Chinese Instruments",
"### Other Known Limitations\nInsufficient sample",
"## Additional Information",
"### Dataset Curators\nDichucheng Li",
"### Evaluation\nLi, Dichucheng, Yulun Wu, Qinyu Li, Jiahao Zhao, Yi Yu, Fan Xia and Wei Li. “Playing Technique Detection by Fusing Note Onset Information in Guzheng Performance.” International Society for Music Information Retrieval Conference (2022).",
"### Licensing Information",
"### Contributions\nPromoting the development of music AI industry"
] |
[
"TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #arxiv-2209.08774 #region-us \n",
"# Dataset Card for GZ_IsoTech Dataset",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: <URL",
"### Dataset Summary\nThe Guzheng is a kind of traditional Chinese instruments with diverse playing techniques. Instrument playing techniques (IPT) play an important role in musical performance. However, most of the existing works for IPT detection show low efficiency for variable-length audio and provide no assurance in the generalization as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using Fully Convolutional Networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio into several notes and its predictions are fused with frame-wise IPT predictions. During fusion, we add the IPT predictions frame by frame inside each note and get the IPT with the highest probability within each note as the final output of that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of our proposed method in IPT detection.\n\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).",
"### Supported Tasks and Leaderboards\nMIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.flac, .csv)",
"### Data Fields\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).",
"### Data Splits\ntrain, valid, test",
"## Dataset Creation",
"### Curation Rationale\nThe Guzheng is a kind of traditional Chinese instruments with diverse playing techniques. Instrument playing techniques (IPT) play an important role in musical performance. However, most of the existing works for IPT detection show low efficiency for variable-length audio and provide no assurance in the generalization as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using Fully Convolutional Networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio into several notes and its predictions are fused with frame-wise IPT predictions. During fusion, we add the IPT predictions frame by frame inside each note and get the IPT with the highest probability within each note as the final output of that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of our proposed method in IPT detection.",
"### Source Data",
"#### Initial Data Collection and Normalization\nDichucheng Li, Monan Zhou",
"#### Who are the source language producers?\nStudents from FD-LAMT",
"### Annotations",
"#### Annotation process\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).",
"#### Who are the annotators?\nStudents from FD-LAMT",
"### Personal and Sensitive Information\nNone",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nPromoting the development of music AI industry",
"### Discussion of Biases\nOnly for Traditional Chinese Instruments",
"### Other Known Limitations\nInsufficient sample",
"## Additional Information",
"### Dataset Curators\nDichucheng Li",
"### Evaluation\nLi, Dichucheng, Yulun Wu, Qinyu Li, Jiahao Zhao, Yi Yu, Fan Xia and Wei Li. “Playing Technique Detection by Fusing Note Onset Information in Guzheng Performance.” International Society for Music Information Retrieval Conference (2022).",
"### Licensing Information",
"### Contributions\nPromoting the development of music AI industry"
] |
[
54,
13,
35,
458,
16,
7,
6,
19,
177,
10,
5,
287,
4,
19,
18,
5,
177,
17,
10,
8,
15,
15,
10,
5,
10,
67,
6,
13
] |
[
"passage: TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #arxiv-2209.08774 #region-us \n# Dataset Card for GZ_IsoTech Dataset## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: <URL",
"passage: ### Dataset Summary\nThe Guzheng is a kind of traditional Chinese instruments with diverse playing techniques. Instrument playing techniques (IPT) play an important role in musical performance. However, most of the existing works for IPT detection show low efficiency for variable-length audio and provide no assurance in the generalization as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end Guzheng playing technique detection system using Fully Convolutional Networks that can be applied to variable-length audio. Because each Guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio into several notes and its predictions are fused with frame-wise IPT predictions. During fusion, we add the IPT predictions frame by frame inside each note and get the IPT with the highest probability within each note as the final output of that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1-score, outperforming existing works by a large margin, which indicates the effectiveness of our proposed method in IPT detection.\n\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. 
According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).### Supported Tasks and Leaderboards\nMIR, audio classification### Languages\nChinese, English## Dataset Structure### Data Instances\n.zip(.flac, .csv)### Data Fields\nThis database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. These clips cover almost all the tones in the range of guzheng and the most commonly used playing techniques in guzheng performance. According to the different playing techniques of guzheng, the clips are divided into 8 categories: Vibrato(chanyin), Upward Portamento(shanghuayin), Downward Portamento(xiahuayin), Returning Portamento(huihuayin), Glissando (guazou, huazhi), Tremolo(yaozhi), Harmonic(fanyin), Plucks(gou,da,mo,tuo…).### Data Splits\ntrain, valid, test## Dataset Creation"
] |
7e80cab0f2ea0b56098e7b133703d95348f3f27c
|
# Dataset Card for "xlmr_int_hard_trn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/xlmr_int_hard_trn
|
[
"region:us"
] |
2023-10-12T12:28:44+00:00
|
{"dataset_info": {"features": [{"name": "domain_label", "dtype": "int64"}, {"name": "pass_label", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 142739369, "num_examples": 113100}], "download_size": 40732989, "dataset_size": 142739369}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T12:28:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xlmr_int_hard_trn"
More Information needed
|
[
"# Dataset Card for \"xlmr_int_hard_trn\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xlmr_int_hard_trn\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xlmr_int_hard_trn\"\n\nMore Information needed"
] |
712bc3c5483976ab2fc13a4b5a73131605fbab06
|
# PuoData: A curated corpora for Setswana
[](https://arxiv.org/abs/2310.09141)
We believe that PuoData is a valuable resource for the Setswana language community. We hope that PuoData will be used to develop new and innovative applications that benefit the Setswana-speaking community.
Give Feedback 📑: [DSFSI Resource Feedback Form](https://docs.google.com/forms/d/e/1FAIpQLSf7S36dyAUPx2egmXbFpnTBuzoRulhL5Elu-N1eoMhaO7v10w/formResponse)
## Dataset Curation
| Dataset Name | Kind | Num. of Tokens |
|---|---|---|
| *PuoData* | | |
| NCHLT Setswana (Eiselen and Puttkammer, 2014) | Government Documents | 1,010,147 |
| Nalibali Setswana | Childrens Books | 57,654 |
| Setswana Bible | Book(s) | 879,630 |
| SA Constitution | Official Document | 56,194 |
| Leipzig Setswana Corpus BW | Curated Dataset | 219,149 |
| Leipzig Setswana Corpus ZA | Curated Dataset | 218,037 |
| SABC Dikgang tsa Setswana FB (Facebook) | News Headlines | 167,119 |
| SABC MotswedingFM FB | Online Content | 33,092 |
| Leipzig Setswana Wiki | Online Content | 230,333 |
| Setswana Wiki | Online Content | 183,168 |
| Vukuzenzele Monolingual TSN | Government News | 157,798 |
| gov-za Cabinet speeches TSN | Government Speeches | 591,920 |
| Department Basic Education TSN | Education Material | 708,965 |
| **PuoData Total** | 25MB on disk | **4,513,206** |
| *PuoData+JW300* | | |
| JW300 Setswana| Book(s) | 19,782,122 |
| **PuoData+JW300** | 124MB on disk | **24,295,328** |
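As a rough illustration of how per-corpus token counts like those in the table above can be tallied, the sketch below uses simple whitespace tokenization. The exact tokenizer behind the published counts is not specified here, so treat both the function and its numbers as an approximation:

```python
def count_tokens(text: str) -> int:
    """Count whitespace-separated tokens in a corpus string."""
    return len(text.split())

def corpus_totals(corpora: dict) -> dict:
    """Map each sub-corpus name to its token count and add a grand total,
    mirroring the per-source rows and the PuoData Total row above."""
    counts = {name: count_tokens(text) for name, text in corpora.items()}
    counts["Total"] = sum(counts.values())
    return counts

# Toy stand-ins for the real sub-corpora (illustrative text only):
sample = {
    "Setswana Wiki": "Setswana ke puo ya Batswana",
    "SA Constitution": "Molaotheo wa Rephaboliki ya Aforika Borwa",
}
print(corpus_totals(sample))
```

Running the same tally over the actual sub-corpus files would reproduce counts in the spirit of the table, though whitespace splitting will differ slightly from any subword or rule-based tokenizer.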
## Dataset Uses
We used this corpus to train [PuoBERTa](https://github.com/dsfsi/PuoBERTa), 🤗 [https://huggingface.co/dsfsi/PuoBERTa](https://huggingface.co/dsfsi/PuoBERTa). It is also part of the corpus used for [PuoBERTaJW300](https://huggingface.co/dsfsi/PuoBERTaJW300).
## Citation Information
Bibtex Reference
```
@inproceedings{marivate2023puoberta,
title = {PuoBERTa: Training and evaluation of a curated language model for Setswana},
author = {Vukosi Marivate and Moseli Mots'Oehli and Valencia Wagner and Richard Lastrucci and Isheanesu Dzingirai},
year = {2023},
booktitle= {Artificial Intelligence Research. SACAIR 2023. Communications in Computer and Information Science},
url= {https://link.springer.com/chapter/10.1007/978-3-031-49002-6_17},
keywords = {NLP},
preprint_url = {https://arxiv.org/abs/2310.09141},
dataset_url = {https://github.com/dsfsi/PuoBERTa},
software_url = {https://huggingface.co/dsfsi/PuoBERTa}
}
```
## License
PuoData is licensed under CC-BY-SA-4.0; the monolingual sub-corpora carry different licenses depending on each source website's terms.
* License for Data - [CC-BY-SA-4.0](LICENSE)
## Dataset Contact
For more details, reach out or check our [website](https://dsfsi.github.io/).
Email: [email protected]
**Enjoy exploring Setswana through AI!**
|
dsfsi/PuoData
|
[
"size_categories:1M<n<10M",
"language:tn",
"license:cc-by-sa-4.0",
"setswana",
"corpora",
"arxiv:2310.09141",
"region:us"
] |
2023-10-12T12:41:32+00:00
|
{"language": ["tn"], "license": "cc-by-sa-4.0", "size_categories": ["1M<n<10M"], "pretty_name": "puodata", "tags": ["setswana", "corpora"]}
|
2023-12-04T19:07:12+00:00
|
[
"2310.09141"
] |
[
"tn"
] |
TAGS
#size_categories-1M<n<10M #language-Tswana #license-cc-by-sa-4.0 #setswana #corpora #arxiv-2310.09141 #region-us
|
PuoData: A curated corpora for Setswana
=======================================
Dataset Name: Setswana Bible, Kind: Book(s), Num. of Tokens: 879,630
Dataset Name: SA Constitution, Kind: Official Document, Num. of Tokens: 56,194
Dataset Name: Leipzig Setswana Corpus BW, Kind: Curated Dataset, Num. of Tokens: 219,149
Dataset Name: Leipzig Setswana Corpus ZA, Kind: Curated Dataset, Num. of Tokens: 218,037
Dataset Name: SABC Dikgang tsa Setswana FB (Facebook), Kind: News Headlines, Num. of Tokens: 167,119
Dataset Name: SABC MotswedingFM FB, Kind: Online Content, Num. of Tokens: 33,092
Dataset Name: Leipzig Setswana Wiki, Kind: Online Content, Num. of Tokens: 230,333
Dataset Name: Setswana Wiki, Kind: Online Content, Num. of Tokens: 183,168
Dataset Name: Vukuzenzele Monolingual TSN, Kind: Government News, Num. of Tokens: 157,798
Dataset Name: gov-za Cabinet speeches TSN, Kind: Government Speeches, Num. of Tokens: 591,920
Dataset Name: Department Basic Education TSN, Kind: Education Material, Num. of Tokens: 708,965
Dataset Name: PuoData Total, Kind: 25MB on disk, Num. of Tokens: 4,513,206
Dataset Name: *PuoData+JW300*, Kind: , Num. of Tokens:
Dataset Name: JW300 Setswana, Kind: Book(s), Num. of Tokens: 19,782,122
Dataset Name: PuoData+JW300, Kind: 124MB on disk, Num. of Tokens: 24,295,328
Dataset Uses
------------
We used this corpus to train PuoBERTa, URL It is also part of the corpus used for PuoBERTaJW300.
Bibtex Reference
License
-------
The license of PuoData is in CC-BY-SA-4.0. the monolingual data have difference licenses depending on the news website license
* License for Data - CC-BY-SA-4.0
Dataset Contact
---------------
For more details, reach out or check our website.
Email: vukosi.marivate@URL
Enjoy exploring Setswana through AI!
|
[] |
[
"TAGS\n#size_categories-1M<n<10M #language-Tswana #license-cc-by-sa-4.0 #setswana #corpora #arxiv-2310.09141 #region-us \n"
] |
[
50
] |
[
"passage: TAGS\n#size_categories-1M<n<10M #language-Tswana #license-cc-by-sa-4.0 #setswana #corpora #arxiv-2310.09141 #region-us \n"
] |
c77b222218cd41c961a19515c3f28844816a1839
|
# Dataset Card for Guzheng Technique 99 Dataset
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/Guzheng_Tech99>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** <https://github.com/LiDCC/GuzhengTech99/tree/windows>
### Dataset Summary
Instrument playing technique (IPT) is a key element of musical presentation.
Guzheng is a polyphonic instrument. In Guzheng performance, notes played with different IPTs often overlap, and mixed IPTs, which can be decomposed into multiple independent IPTs, are common. Most existing work on IPT detection uses datasets of monophonic instrumental solo pieces. This dataset fills that gap in the research field.
The dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. Each note is annotated with its onset, offset, pitch, and one of seven playing techniques (vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), yielding 63,352 annotated labels in total. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.
### Supported Tasks and Leaderboards
MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.flac, .csv)
### Data Fields
The dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. Each note is annotated with its onset, offset, pitch, and one of seven playing techniques (vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), yielding 63,352 annotated labels in total. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.
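To make the note-level annotation scheme concrete, here is a small hypothetical sketch that expands (onset, offset, technique) note annotations into the kind of frame-level multi-label matrix that frame-level IPT models typically consume. The frame rate, tuple layout, and technique list ordering below are assumptions for illustration, not the dataset's documented CSV schema:

```python
# Frame rate and annotation layout are assumed for illustration;
# they are not taken from the Guzheng_Tech99 CSV files themselves.
FRAME_RATE = 100  # frames per second (assumption)
TECHNIQUES = [
    "vibrato", "point note", "upward portamento",
    "downward portamento", "plucks", "glissando", "tremolo",
]

def notes_to_frame_labels(notes, n_frames, frame_rate=FRAME_RATE):
    """Expand note-level (onset_sec, offset_sec, technique) tuples into a
    frame-level multi-label matrix of shape (n_frames, 7).
    Overlapping notes set multiple technique columns in the same frame."""
    labels = [[0] * len(TECHNIQUES) for _ in range(n_frames)]
    for onset, offset, tech in notes:
        col = TECHNIQUES.index(tech)
        start = max(int(round(onset * frame_rate)), 0)
        end = min(int(round(offset * frame_rate)), n_frames)
        for f in range(start, end):
            labels[f][col] = 1
    return labels

# Two overlapping notes: a pluck with a tremolo starting over it.
notes = [(0.00, 0.05, "plucks"), (0.02, 0.06, "tremolo")]
frames = notes_to_frame_labels(notes, n_frames=8)
```

Overlapping notes, the polyphony this card emphasizes, simply activate more than one technique column in the same frame, which is why frame-level multi-label formulations suit this dataset.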
### Data Splits
train, valid, test
## Dataset Creation
### Curation Rationale
Instrument playing technique (IPT) is a key element of musical presentation.
### Source Data
#### Initial Data Collection and Normalization
Dichucheng Li, Monan Zhou
#### Who are the source language producers?
Students from FD-LAMT
### Annotations
#### Annotation process
Guzheng is a polyphonic instrument. In Guzheng performance, notes played with different IPTs often overlap, and mixed IPTs, which can be decomposed into multiple independent IPTs, are common. Most existing work on IPT detection uses datasets of monophonic instrumental solo pieces. This dataset fills that gap in the research field.
#### Who are the annotators?
Students from FD-LAMT
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of music AI industry
### Discussion of Biases
Only for Traditional Chinese Instruments
### Other Known Limitations
Insufficient sample
## Additional Information
### Dataset Curators
Dichucheng Li
### Evaluation
[Dichucheng Li, Mingjin Che, Wenwu Meng, Yulun Wu, Yi Yu, Fan Xia and Wei Li. "Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale Network and Self-Attention Mechanism", in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023).](https://arxiv.org/pdf/2303.13272.pdf)
### Licensing Information
```
MIT License
Copyright (c) FD-LAMT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Yuan Wang, Zhaowen Wang, Wei Li and Zijin Li},
title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Promoting the development of music AI industry
|
ccmusic-database/Guzheng_Tech99
|
[
"task_categories:audio-classification",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"arxiv:2303.13272",
"region:us"
] |
2023-10-12T12:49:12+00:00
|
{"language": ["zh", "en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["audio-classification"], "pretty_name": "Guzheng Technique 99 Dataset", "tags": ["music", "art"], "viewer": false}
|
2023-12-04T16:08:25+00:00
|
[
"2303.13272"
] |
[
"zh",
"en"
] |
TAGS
#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #arxiv-2303.13272 #region-us
|
# Dataset Card for Guzheng Technique 99 Dataset
## Dataset Description
- Homepage: <URL>
- Repository: <URL
- Paper: <URL
- Leaderboard: <URL
- Point of Contact: <URL
### Dataset Summary
Instrument playing technique (IPT) is a key element of musical presentation.
Guzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs are usually overlapped and mixed IPTs that can be decomposed into multiple independent IPTs are usually used. Most existing work on IPT detection typically uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.
The dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.
### Supported Tasks and Leaderboards
MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.flac, .csv)
### Data Fields
The dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.
### Data Splits
train, valid, test
## Dataset Creation
### Curation Rationale
Instrument playing technique (IPT) is a key element of musical presentation.
### Source Data
#### Initial Data Collection and Normalization
Dichucheng Li, Monan Zhou
#### Who are the source language producers?
Students from FD-LAMT
### Annotations
#### Annotation process
Guzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs are usually overlapped and mixed IPTs that can be decomposed into multiple independent IPTs are usually used. Most existing work on IPT detection typically uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.
#### Who are the annotators?
Students from FD-LAMT
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of music AI industry
### Discussion of Biases
Only for Traditional Chinese Instruments
### Other Known Limitations
Insufficient sample
## Additional Information
### Dataset Curators
Dichucheng Li
### Evaluation
Dichucheng Li, Mingjin Che, Wenwu Meng, Yulun Wu, Yi Yu, Fan Xia and Wei Li. "Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale Network and Self-Attention Mechanism", in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023).
### Licensing Information
### Contributions
Promoting the development of music AI industry
|
[
"# Dataset Card for Guzheng Technique 99 Dataset",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: <URL",
"### Dataset Summary\nInstrument playing technique (IPT) is a key element of musical presentation.\n\nGuzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs are usually overlapped and mixed IPTs that can be decomposed into multiple independent IPTs are usually used. Most existing work on IPT detection typically uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.\n\nThe dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.",
"### Supported Tasks and Leaderboards\nMIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.flac, .csv)",
"### Data Fields\nThe dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.",
"### Data Splits\ntrain, valid, test",
"## Dataset Creation",
"### Curation Rationale\nInstrument playing technique (IPT) is a key element of musical presentation.",
"### Source Data",
"#### Initial Data Collection and Normalization\nDichucheng Li, Monan Zhou",
"#### Who are the source language producers?\nStudents from FD-LAMT",
"### Annotations",
"#### Annotation process\nGuzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs are usually overlapped and mixed IPTs that can be decomposed into multiple independent IPTs are usually used. Most existing work on IPT detection typically uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.",
"#### Who are the annotators?\nStudents from FD-LAMT",
"### Personal and Sensitive Information\nNone",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nPromoting the development of music AI industry",
"### Discussion of Biases\nOnly for Traditional Chinese Instruments",
"### Other Known Limitations\nInsufficient sample",
"## Additional Information",
"### Dataset Curators\nDichucheng Li",
"### Evaluation\nDichucheng Li, Mingjin Che, Wenwu Meng, Yulun Wu, Yi Yu, Fan Xia and Wei Li. \"Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale Network and Self-Attention Mechanism\", in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023).",
"### Licensing Information",
"### Contributions\nPromoting the development of music AI industry"
] |
[
"TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #arxiv-2303.13272 #region-us \n",
"# Dataset Card for Guzheng Technique 99 Dataset",
"## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: <URL",
"### Dataset Summary\nInstrument playing technique (IPT) is a key element of musical presentation.\n\nGuzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs are usually overlapped and mixed IPTs that can be decomposed into multiple independent IPTs are usually used. Most existing work on IPT detection typically uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.\n\nThe dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.",
"### Supported Tasks and Leaderboards\nMIR, audio classification",
"### Languages\nChinese, English",
"## Dataset Structure",
"### Data Instances\n.zip(.flac, .csv)",
"### Data Fields\nThe dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.",
"### Data Splits\ntrain, valid, test",
"## Dataset Creation",
"### Curation Rationale\nInstrument playing technique (IPT) is a key element of musical presentation.",
"### Source Data",
"#### Initial Data Collection and Normalization\nDichucheng Li, Monan Zhou",
"#### Who are the source language producers?\nStudents from FD-LAMT",
"### Annotations",
"#### Annotation process\nGuzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs are usually overlapped and mixed IPTs that can be decomposed into multiple independent IPTs are usually used. Most existing work on IPT detection typically uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.",
"#### Who are the annotators?\nStudents from FD-LAMT",
"### Personal and Sensitive Information\nNone",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nPromoting the development of music AI industry",
"### Discussion of Biases\nOnly for Traditional Chinese Instruments",
"### Other Known Limitations\nInsufficient sample",
"## Additional Information",
"### Dataset Curators\nDichucheng Li",
"### Evaluation\nDichucheng Li, Mingjin Che, Wenwu Meng, Yulun Wu, Yi Yu, Fan Xia and Wei Li. \"Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale Network and Self-Attention Mechanism\", in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023).",
"### Licensing Information",
"### Contributions\nPromoting the development of music AI industry"
] |
[
54,
12,
35,
217,
16,
7,
6,
19,
120,
10,
5,
22,
4,
19,
18,
5,
86,
17,
10,
8,
15,
15,
10,
5,
10,
84,
6,
13
] |
[
"passage: TAGS\n#task_categories-audio-classification #size_categories-n<1K #language-Chinese #language-English #license-mit #music #art #arxiv-2303.13272 #region-us \n# Dataset Card for Guzheng Technique 99 Dataset## Dataset Description\n- Homepage: <URL>\n- Repository: <URL\n- Paper: <URL\n- Leaderboard: <URL\n- Point of Contact: <URL### Dataset Summary\nInstrument playing technique (IPT) is a key element of musical presentation.\n\nGuzheng is a polyphonic instrument. In Guzheng performance, notes with different IPTs are usually overlapped and mixed IPTs that can be decomposed into multiple independent IPTs are usually used. Most existing work on IPT detection typically uses datasets with monophonic instrumental solo pieces. This dataset fills a gap in the research field.\n\nThe dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.### Supported Tasks and Leaderboards\nMIR, audio classification### Languages\nChinese, English## Dataset Structure### Data Instances\n.zip(.flac, .csv)### Data Fields\nThe dataset comprises 99 Guzheng solo compositions, recorded by professionals in a studio, totaling 9064.6 seconds. It includes seven playing techniques labeled for each note (onset, offset, pitch, vibrato, point note, upward portamento, downward portamento, plucks, glissando, and tremolo), resulting in 63,352 annotated labels. The dataset is divided into 79, 10, and 10 songs for the training, validation, and test sets, respectively.### Data Splits\ntrain, valid, test## Dataset Creation"
] |