# Dataset Card for Evaluation run of postbot/distilgpt2-emailgen-V2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/postbot/distilgpt2-emailgen-V2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [postbot/distilgpt2-emailgen-V2](https://huggingface.co/postbot/distilgpt2-emailgen-V2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_postbot__distilgpt2-emailgen-V2_public",
"harness_winogrande_5",
split="train")
```
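The aggregated metrics can be loaded the same way by pointing at the "results" configuration mentioned above. A minimal sketch, assuming the split naming described in this card (the text refers to both a "train" pointer and per-eval "latest" splits):
```python
from datasets import load_dataset

# Sketch: load the aggregated "results" configuration for this run.
# "latest" is the split name used in the "Latest results" section below;
# the card also describes "train" as pointing to the most recent results.
results = load_dataset(
    "open-llm-leaderboard/details_postbot__distilgpt2-emailgen-V2_public",
    "results",
    split="latest",
)
print(results)  # inspect the returned rows and metric columns
```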
## Latest results
These are the [latest results from run 2023-11-13T13:28:50.616028](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__distilgpt2-emailgen-V2_public/blob/main/results_2023-11-13T13-28-50.616028.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; the results of each eval can be found in its "latest" split):
```python
{
"all": {
"acc": 0.2542066525769912,
"acc_stderr": 0.030683618404772357,
"acc_norm": 0.2547326716552163,
"acc_norm_stderr": 0.031502030622377816,
"mc1": 0.2717258261933905,
"mc1_stderr": 0.015572840452875828,
"mc2": 0.4651319733972654,
"mc2_stderr": 0.016103347289806055,
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.003143875838926175,
"f1_stderr": 0.00031171556932365637
},
"harness|arc:challenge|25": {
"acc": 0.1689419795221843,
"acc_stderr": 0.01094979565248503,
"acc_norm": 0.2098976109215017,
"acc_norm_stderr": 0.011900548748047442
},
"harness|hellaswag|10": {
"acc": 0.26598287193786097,
"acc_stderr": 0.004409521343140109,
"acc_norm": 0.26777534355706034,
"acc_norm_stderr": 0.004418948941099411
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.040491220417025055,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.040491220417025055
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.16447368421052633,
"acc_stderr": 0.03016753346863271,
"acc_norm": 0.16447368421052633,
"acc_norm_stderr": 0.03016753346863271
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22641509433962265,
"acc_stderr": 0.025757559893106744,
"acc_norm": 0.22641509433962265,
"acc_norm_stderr": 0.025757559893106744
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.23121387283236994,
"acc_stderr": 0.0321473730202947,
"acc_norm": 0.23121387283236994,
"acc_norm_stderr": 0.0321473730202947
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.043898699568087785,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.043898699568087785
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.15,
"acc_stderr": 0.035887028128263714,
"acc_norm": 0.15,
"acc_norm_stderr": 0.035887028128263714
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.23829787234042554,
"acc_stderr": 0.02785125297388979,
"acc_norm": 0.23829787234042554,
"acc_norm_stderr": 0.02785125297388979
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.04049339297748141,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.04049339297748141
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2206896551724138,
"acc_stderr": 0.03455930201924811,
"acc_norm": 0.2206896551724138,
"acc_norm_stderr": 0.03455930201924811
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2566137566137566,
"acc_stderr": 0.022494510767503154,
"acc_norm": 0.2566137566137566,
"acc_norm_stderr": 0.022494510767503154
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15873015873015872,
"acc_stderr": 0.03268454013011743,
"acc_norm": 0.15873015873015872,
"acc_norm_stderr": 0.03268454013011743
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3161290322580645,
"acc_stderr": 0.02645087448904277,
"acc_norm": 0.3161290322580645,
"acc_norm_stderr": 0.02645087448904277
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.20606060606060606,
"acc_stderr": 0.03158415324047707,
"acc_norm": 0.20606060606060606,
"acc_norm_stderr": 0.03158415324047707
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.35858585858585856,
"acc_stderr": 0.03416903640391521,
"acc_norm": 0.35858585858585856,
"acc_norm_stderr": 0.03416903640391521
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.22797927461139897,
"acc_stderr": 0.030276909945178256,
"acc_norm": 0.22797927461139897,
"acc_norm_stderr": 0.030276909945178256
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2128205128205128,
"acc_stderr": 0.020752423722128013,
"acc_norm": 0.2128205128205128,
"acc_norm_stderr": 0.020752423722128013
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.29259259259259257,
"acc_stderr": 0.027738969632176088,
"acc_norm": 0.29259259259259257,
"acc_norm_stderr": 0.027738969632176088
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.02665353159671548,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.02665353159671548
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2052980132450331,
"acc_stderr": 0.03297986648473835,
"acc_norm": 0.2052980132450331,
"acc_norm_stderr": 0.03297986648473835
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.21467889908256882,
"acc_stderr": 0.01760430414925648,
"acc_norm": 0.21467889908256882,
"acc_norm_stderr": 0.01760430414925648
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4675925925925926,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.4675925925925926,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.24019607843137256,
"acc_stderr": 0.02998373305591361,
"acc_norm": 0.24019607843137256,
"acc_norm_stderr": 0.02998373305591361
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2616033755274262,
"acc_stderr": 0.028609516716994934,
"acc_norm": 0.2616033755274262,
"acc_norm_stderr": 0.028609516716994934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.2062780269058296,
"acc_stderr": 0.027157150479563824,
"acc_norm": 0.2062780269058296,
"acc_norm_stderr": 0.027157150479563824
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.256198347107438,
"acc_stderr": 0.03984979653302872,
"acc_norm": 0.256198347107438,
"acc_norm_stderr": 0.03984979653302872
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.043300437496507437,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.043300437496507437
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3006134969325153,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.3006134969325153,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.24107142857142858,
"acc_stderr": 0.04059867246952687,
"acc_norm": 0.24107142857142858,
"acc_norm_stderr": 0.04059867246952687
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.20512820512820512,
"acc_stderr": 0.02645350805404035,
"acc_norm": 0.20512820512820512,
"acc_norm_stderr": 0.02645350805404035
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2656449553001277,
"acc_stderr": 0.015794302487888726,
"acc_norm": 0.2656449553001277,
"acc_norm_stderr": 0.015794302487888726
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2138728323699422,
"acc_stderr": 0.02207570925175717,
"acc_norm": 0.2138728323699422,
"acc_norm_stderr": 0.02207570925175717
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.238562091503268,
"acc_stderr": 0.024404394928087873,
"acc_norm": 0.238562091503268,
"acc_norm_stderr": 0.024404394928087873
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2990353697749196,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.2990353697749196,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.02540719779889016,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.02540719779889016
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.025257861359432407,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.025257861359432407
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2392438070404172,
"acc_stderr": 0.010896123652676651,
"acc_norm": 0.2392438070404172,
"acc_norm_stderr": 0.010896123652676651
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4264705882352941,
"acc_stderr": 0.030042615832714854,
"acc_norm": 0.4264705882352941,
"acc_norm_stderr": 0.030042615832714854
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.26633986928104575,
"acc_stderr": 0.017883188134667178,
"acc_norm": 0.26633986928104575,
"acc_norm_stderr": 0.017883188134667178
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.20909090909090908,
"acc_stderr": 0.038950910157241364,
"acc_norm": 0.20909090909090908,
"acc_norm_stderr": 0.038950910157241364
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.2163265306122449,
"acc_stderr": 0.026358916334904035,
"acc_norm": 0.2163265306122449,
"acc_norm_stderr": 0.026358916334904035
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.030147775935409224,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.030147775935409224
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.29518072289156627,
"acc_stderr": 0.035509201856896294,
"acc_norm": 0.29518072289156627,
"acc_norm_stderr": 0.035509201856896294
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.0312678171466318,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.0312678171466318
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2717258261933905,
"mc1_stderr": 0.015572840452875828,
"mc2": 0.4651319733972654,
"mc2_stderr": 0.016103347289806055
},
"harness|winogrande|5": {
"acc": 0.5201262825572218,
"acc_stderr": 0.01404109666434433
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.003143875838926175,
"f1_stderr": 0.00031171556932365637
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
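Since successive evals may not cover the same tasks, a quick way to see which tasks are present is to enumerate the configurations and read the "latest" split of the one you need. A short sketch; the "harness_gsm8k_5" configuration name is an assumption that follows the same naming pattern as the "harness_winogrande_5" example above:
```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_postbot__distilgpt2-emailgen-V2_public"

# One configuration per evaluated task, plus the aggregated "results" config.
configs = get_dataset_config_names(repo)
print(len(configs), configs[:5])

# Per-sample details for a single eval; "latest" points to the most recent run.
gsm8k_details = load_dataset(repo, "harness_gsm8k_5", split="latest")
print(gsm8k_details)
```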
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
"region:us"
]
| 2023-11-13T13:30:39+00:00 | {"pretty_name": "Evaluation run of postbot/distilgpt2-emailgen-V2", "dataset_summary": "Dataset automatically created during the evaluation run of model [postbot/distilgpt2-emailgen-V2](https://huggingface.co/postbot/distilgpt2-emailgen-V2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_postbot__distilgpt2-emailgen-V2_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T13:28:50.616028](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__distilgpt2-emailgen-V2_public/blob/main/results_2023-11-13T13-28-50.616028.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2542066525769912,\n \"acc_stderr\": 0.030683618404772357,\n \"acc_norm\": 0.2547326716552163,\n \"acc_norm_stderr\": 0.031502030622377816,\n \"mc1\": 0.2717258261933905,\n \"mc1_stderr\": 0.015572840452875828,\n \"mc2\": 0.4651319733972654,\n \"mc2_stderr\": 0.016103347289806055,\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.003143875838926175,\n \"f1_stderr\": 0.00031171556932365637\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.1689419795221843,\n \"acc_stderr\": 0.01094979565248503,\n \"acc_norm\": 0.2098976109215017,\n \"acc_norm_stderr\": 0.011900548748047442\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.26598287193786097,\n \"acc_stderr\": 0.004409521343140109,\n \"acc_norm\": 0.26777534355706034,\n \"acc_norm_stderr\": 0.004418948941099411\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.32592592592592595,\n \"acc_stderr\": 0.040491220417025055,\n \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.040491220417025055\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.16447368421052633,\n \"acc_stderr\": 0.03016753346863271,\n \"acc_norm\": 0.16447368421052633,\n \"acc_norm_stderr\": 0.03016753346863271\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.22641509433962265,\n \"acc_stderr\": 0.025757559893106744,\n \"acc_norm\": 0.22641509433962265,\n \"acc_norm_stderr\": 0.025757559893106744\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 
0.2222222222222222,\n \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.23121387283236994,\n \"acc_stderr\": 0.0321473730202947,\n \"acc_norm\": 0.23121387283236994,\n \"acc_norm_stderr\": 0.0321473730202947\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.043898699568087785,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.043898699568087785\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.15,\n \"acc_stderr\": 0.035887028128263714,\n \"acc_norm\": 0.15,\n \"acc_norm_stderr\": 0.035887028128263714\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.23829787234042554,\n \"acc_stderr\": 0.02785125297388979,\n \"acc_norm\": 0.23829787234042554,\n \"acc_norm_stderr\": 0.02785125297388979\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.04049339297748141,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.04049339297748141\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2206896551724138,\n \"acc_stderr\": 0.03455930201924811,\n \"acc_norm\": 0.2206896551724138,\n \"acc_norm_stderr\": 0.03455930201924811\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2566137566137566,\n \"acc_stderr\": 0.022494510767503154,\n \"acc_norm\": 0.2566137566137566,\n \"acc_norm_stderr\": 0.022494510767503154\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15873015873015872,\n \"acc_stderr\": 0.03268454013011743,\n \"acc_norm\": 0.15873015873015872,\n \"acc_norm_stderr\": 0.03268454013011743\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3161290322580645,\n \"acc_stderr\": 0.02645087448904277,\n \"acc_norm\": 0.3161290322580645,\n \"acc_norm_stderr\": 0.02645087448904277\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.20606060606060606,\n \"acc_stderr\": 0.03158415324047707,\n \"acc_norm\": 0.20606060606060606,\n \"acc_norm_stderr\": 0.03158415324047707\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.35858585858585856,\n \"acc_stderr\": 0.03416903640391521,\n \"acc_norm\": 0.35858585858585856,\n \"acc_norm_stderr\": 0.03416903640391521\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.22797927461139897,\n 
\"acc_stderr\": 0.030276909945178256,\n \"acc_norm\": 0.22797927461139897,\n \"acc_norm_stderr\": 0.030276909945178256\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2128205128205128,\n \"acc_stderr\": 0.020752423722128013,\n \"acc_norm\": 0.2128205128205128,\n \"acc_norm_stderr\": 0.020752423722128013\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.29259259259259257,\n \"acc_stderr\": 0.027738969632176088,\n \"acc_norm\": 0.29259259259259257,\n \"acc_norm_stderr\": 0.027738969632176088\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.21428571428571427,\n \"acc_stderr\": 0.02665353159671548,\n \"acc_norm\": 0.21428571428571427,\n \"acc_norm_stderr\": 0.02665353159671548\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2052980132450331,\n \"acc_stderr\": 0.03297986648473835,\n \"acc_norm\": 0.2052980132450331,\n \"acc_norm_stderr\": 0.03297986648473835\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.21467889908256882,\n \"acc_stderr\": 0.01760430414925648,\n \"acc_norm\": 0.21467889908256882,\n \"acc_norm_stderr\": 0.01760430414925648\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.24019607843137256,\n \"acc_stderr\": 0.02998373305591361,\n \"acc_norm\": 0.24019607843137256,\n \"acc_norm_stderr\": 0.02998373305591361\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2616033755274262,\n \"acc_stderr\": 0.028609516716994934,\n \"acc_norm\": 0.2616033755274262,\n \"acc_norm_stderr\": 0.028609516716994934\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.2062780269058296,\n \"acc_stderr\": 0.027157150479563824,\n \"acc_norm\": 0.2062780269058296,\n \"acc_norm_stderr\": 0.027157150479563824\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.256198347107438,\n \"acc_stderr\": 0.03984979653302872,\n \"acc_norm\": 0.256198347107438,\n \"acc_norm_stderr\": 0.03984979653302872\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.043300437496507437,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.043300437496507437\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3006134969325153,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.3006134969325153,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.24107142857142858,\n \"acc_stderr\": 0.04059867246952687,\n \"acc_norm\": 0.24107142857142858,\n \"acc_norm_stderr\": 0.04059867246952687\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.20512820512820512,\n \"acc_stderr\": 0.02645350805404035,\n \"acc_norm\": 0.20512820512820512,\n \"acc_norm_stderr\": 0.02645350805404035\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 
0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2656449553001277,\n \"acc_stderr\": 0.015794302487888726,\n \"acc_norm\": 0.2656449553001277,\n \"acc_norm_stderr\": 0.015794302487888726\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.2138728323699422,\n \"acc_stderr\": 0.02207570925175717,\n \"acc_norm\": 0.2138728323699422,\n \"acc_norm_stderr\": 0.02207570925175717\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.238562091503268,\n \"acc_stderr\": 0.024404394928087873,\n \"acc_norm\": 0.238562091503268,\n \"acc_norm_stderr\": 0.024404394928087873\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2990353697749196,\n \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.2990353697749196,\n \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.02540719779889016,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.02540719779889016\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.23404255319148937,\n \"acc_stderr\": 0.025257861359432407,\n \"acc_norm\": 0.23404255319148937,\n \"acc_norm_stderr\": 0.025257861359432407\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2392438070404172,\n \"acc_stderr\": 0.010896123652676651,\n \"acc_norm\": 0.2392438070404172,\n \"acc_norm_stderr\": 0.010896123652676651\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4264705882352941,\n \"acc_stderr\": 0.030042615832714854,\n \"acc_norm\": 0.4264705882352941,\n \"acc_norm_stderr\": 0.030042615832714854\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.26633986928104575,\n \"acc_stderr\": 0.017883188134667178,\n \"acc_norm\": 0.26633986928104575,\n \"acc_norm_stderr\": 0.017883188134667178\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.038950910157241364,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.038950910157241364\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.2163265306122449,\n \"acc_stderr\": 0.026358916334904035,\n \"acc_norm\": 0.2163265306122449,\n \"acc_norm_stderr\": 0.026358916334904035\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n \"acc_stderr\": 0.030147775935409224,\n \"acc_norm\": 0.23880597014925373,\n \"acc_norm_stderr\": 0.030147775935409224\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.29518072289156627,\n \"acc_stderr\": 0.035509201856896294,\n \"acc_norm\": 0.29518072289156627,\n \"acc_norm_stderr\": 0.035509201856896294\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.0312678171466318,\n \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.0312678171466318\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2717258261933905,\n \"mc1_stderr\": 0.015572840452875828,\n \"mc2\": 0.4651319733972654,\n \"mc2_stderr\": 0.016103347289806055\n },\n \"harness|winogrande|5\": {\n 
\"acc\": 0.5201262825572218,\n \"acc_stderr\": 0.01404109666434433\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.003143875838926175,\n \"f1_stderr\": 0.00031171556932365637\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/postbot/distilgpt2-emailgen-V2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|arc:challenge|25_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|drop|3_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|gsm8k|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hellaswag|10_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T13-28-50.616028.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T13-28-50.616028.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T13-28-50.616028.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T13-28-50.616028.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T13-28-50.616028.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["**/details_harness|winogrande|5_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T13-28-50.616028.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T13_28_50.616028", "path": ["results_2023-11-13T13-28-50.616028.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T13-28-50.616028.parquet"]}]}]} | 2023-11-13T13:31:25+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of postbot/distilgpt2-emailgen-V2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model postbot/distilgpt2-emailgen-V2 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-13T13:28:50.616028 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of postbot/distilgpt2-emailgen-V2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model postbot/distilgpt2-emailgen-V2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T13:28:50.616028(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of postbot/distilgpt2-emailgen-V2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model postbot/distilgpt2-emailgen-V2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T13:28:50.616028(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of postbot/distilgpt2-emailgen-V2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model postbot/distilgpt2-emailgen-V2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T13:28:50.616028(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
3f3716a313f306e53553d57b2f11474f2cad113f | # Dataset Card for "zola"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | truong-xuan-linh/zola | [
"region:us"
]
| 2023-11-13T13:48:54+00:00 | {"dataset_info": {"features": [{"name": "bannerImage", "dtype": "image"}, {"name": "en_caption", "dtype": "string"}, {"name": "concat_caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49802715.406, "num_examples": 1362}], "download_size": 48774124, "dataset_size": 49802715.406}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-15T15:41:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "zola"
More Information needed | [
"# Dataset Card for \"zola\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"zola\"\n\nMore Information needed"
]
| [
6,
11
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"zola\"\n\nMore Information needed"
]
|
b9a6f3b7f43788b15c655ce7bac5fe0d62c55731 | # Dataset Card for "oct-object-detection"
The dataset is composed of images, each with individual object detection boxes in COCO format (x, y, w, h). Images are OCT scans (a type of eye scan) with boxes indicating features associated with AMD disease.
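As a quick, hedged illustration (not part of the original card), the sketch below loads the dataset and draws the COCO-style boxes onto one image. The feature names `image`, `objects`, `bbox`, and `categories` follow the schema declared in this card's metadata, and each box is assumed to hold exactly four values (x, y, w, h).

```python
from datasets import load_dataset
from PIL import ImageDraw

# Load the train split declared in the card metadata.
ds = load_dataset("joseluhf11/oct-object-detection", split="train")

example = ds[0]
image = example["image"].copy()  # decoded as a PIL image by the Image feature
draw = ImageDraw.Draw(image)

# Boxes are stored in COCO format: (x, y, w, h) with the origin at the top-left corner.
for (x, y, w, h), label in zip(example["objects"]["bbox"], example["objects"]["categories"]):
    # Convert to the (x0, y0, x1, y1) corners expected by PIL's rectangle().
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, max(y - 10, 0)), label, fill="red")

image.save("oct_example_with_boxes.png")
```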
[Source dataset](https://doi.org/10.1101/2023.03.29.534704) | joseluhf11/oct-object-detection | [
"region:us"
]
| 2023-11-13T13:51:02+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "categories", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 666483144.628, "num_examples": 4698}], "download_size": 76903163, "dataset_size": 666483144.628}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-22T08:36:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "oct-object-detection"
The dataset is composed of images, each with individual object detection boxes in COCO format (x, y, w, h). Images are OCT scans (a type of eye scan) with boxes indicating features associated with AMD disease.
Source dataset | [
"# Dataset Card for \"oct-object-detection\"\nDataset is composed of images with individual object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nSource datataset"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"oct-object-detection\"\nDataset is composed of images with individual object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nSource datataset"
]
| [
6,
66
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"oct-object-detection\"\nDataset is composed of images with individual object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nSource datataset"
]
|
87393953c16f00c31836166e518cb276574a0068 |
# Dataset Card for Evaluation run of EleutherAI/pythia-410m
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/EleutherAI/pythia-410m
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_EleutherAI__pythia-410m_public",
"harness_winogrande_5",
split="train")
```
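The aggregated scores mentioned above live in the "results" configuration. Here is a minimal sketch, assuming the split naming follows the configuration metadata used by these evaluation-run datasets (one timestamped split per run plus a "latest" alias):

```python
from datasets import load_dataset

# Aggregated metrics for the run are stored in the "results" configuration;
# the "latest" split always points at the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_EleutherAI__pythia-410m_public",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated scores for this run
```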
## Latest results
These are the [latest results from run 2023-11-13T14:11:57.049362](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-410m_public/blob/main/results_2023-11-13T14-11-57.049362.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.27276461407760977,
"acc_stderr": 0.03137987193481735,
"acc_norm": 0.27458252512347847,
"acc_norm_stderr": 0.032179450217890426,
"mc1": 0.23745410036719705,
"mc1_stderr": 0.01489627744104184,
"mc2": 0.4121958367286861,
"mc2_stderr": 0.014564451157949564,
"em": 0.0018875838926174498,
"em_stderr": 0.00044451099905590827,
"f1": 0.044600461409396115,
"f1_stderr": 0.0012188499729627125
},
"harness|arc:challenge|25": {
"acc": 0.23122866894197952,
"acc_stderr": 0.012320858834772283,
"acc_norm": 0.2619453924914676,
"acc_norm_stderr": 0.012849054826858115
},
"harness|hellaswag|10": {
"acc": 0.33947420832503483,
"acc_stderr": 0.004725630911520322,
"acc_norm": 0.4084843656642103,
"acc_norm_stderr": 0.00490548949400508
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.03885004245800255,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.03885004245800255
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.21710526315789475,
"acc_stderr": 0.03355045304882924,
"acc_norm": 0.21710526315789475,
"acc_norm_stderr": 0.03355045304882924
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2490566037735849,
"acc_stderr": 0.02661648298050171,
"acc_norm": 0.2490566037735849,
"acc_norm_stderr": 0.02661648298050171
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.03745554791462457,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.03745554791462457
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.03095289021774988,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.03095289021774988
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.04280105837364395,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.04280105837364395
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036843,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036843
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.28936170212765955,
"acc_stderr": 0.029644006577009618,
"acc_norm": 0.28936170212765955,
"acc_norm_stderr": 0.029644006577009618
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.22807017543859648,
"acc_stderr": 0.03947152782669415,
"acc_norm": 0.22807017543859648,
"acc_norm_stderr": 0.03947152782669415
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.22758620689655173,
"acc_stderr": 0.03493950380131183,
"acc_norm": 0.22758620689655173,
"acc_norm_stderr": 0.03493950380131183
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.23809523809523808,
"acc_stderr": 0.021935878081184756,
"acc_norm": 0.23809523809523808,
"acc_norm_stderr": 0.021935878081184756
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.29365079365079366,
"acc_stderr": 0.04073524322147125,
"acc_norm": 0.29365079365079366,
"acc_norm_stderr": 0.04073524322147125
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.17,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.17,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.29354838709677417,
"acc_stderr": 0.025906087021319288,
"acc_norm": 0.29354838709677417,
"acc_norm_stderr": 0.025906087021319288
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.26108374384236455,
"acc_stderr": 0.030903796952114475,
"acc_norm": 0.26108374384236455,
"acc_norm_stderr": 0.030903796952114475
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.23636363636363636,
"acc_stderr": 0.033175059300091805,
"acc_norm": 0.23636363636363636,
"acc_norm_stderr": 0.033175059300091805
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.30303030303030304,
"acc_stderr": 0.032742879140268674,
"acc_norm": 0.30303030303030304,
"acc_norm_stderr": 0.032742879140268674
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.23316062176165803,
"acc_stderr": 0.030516111371476008,
"acc_norm": 0.23316062176165803,
"acc_norm_stderr": 0.030516111371476008
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.358974358974359,
"acc_stderr": 0.024321738484602364,
"acc_norm": 0.358974358974359,
"acc_norm_stderr": 0.024321738484602364
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.026842057873833706,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.026842057873833706
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.23949579831932774,
"acc_stderr": 0.027722065493361276,
"acc_norm": 0.23949579831932774,
"acc_norm_stderr": 0.027722065493361276
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2913907284768212,
"acc_stderr": 0.03710185726119995,
"acc_norm": 0.2913907284768212,
"acc_norm_stderr": 0.03710185726119995
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.326605504587156,
"acc_stderr": 0.020106990889937303,
"acc_norm": 0.326605504587156,
"acc_norm_stderr": 0.020106990889937303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2696078431372549,
"acc_stderr": 0.031145570659486782,
"acc_norm": 0.2696078431372549,
"acc_norm_stderr": 0.031145570659486782
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.22362869198312235,
"acc_stderr": 0.027123298205229972,
"acc_norm": 0.22362869198312235,
"acc_norm_stderr": 0.027123298205229972
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.25112107623318386,
"acc_stderr": 0.029105220833224605,
"acc_norm": 0.25112107623318386,
"acc_norm_stderr": 0.029105220833224605
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2366412213740458,
"acc_stderr": 0.037276735755969195,
"acc_norm": 0.2366412213740458,
"acc_norm_stderr": 0.037276735755969195
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.4049586776859504,
"acc_stderr": 0.04481137755942469,
"acc_norm": 0.4049586776859504,
"acc_norm_stderr": 0.04481137755942469
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.039578354719809805,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.039578354719809805
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2822085889570552,
"acc_stderr": 0.03536117886664743,
"acc_norm": 0.2822085889570552,
"acc_norm_stderr": 0.03536117886664743
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.03894641120044792,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.03894641120044792
},
"harness|hendrycksTest-management|5": {
"acc": 0.2621359223300971,
"acc_stderr": 0.04354631077260594,
"acc_norm": 0.2621359223300971,
"acc_norm_stderr": 0.04354631077260594
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.20512820512820512,
"acc_stderr": 0.026453508054040335,
"acc_norm": 0.20512820512820512,
"acc_norm_stderr": 0.026453508054040335
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.25798212005108556,
"acc_stderr": 0.01564583018834895,
"acc_norm": 0.25798212005108556,
"acc_norm_stderr": 0.01564583018834895
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2543352601156069,
"acc_stderr": 0.02344582627654555,
"acc_norm": 0.2543352601156069,
"acc_norm_stderr": 0.02344582627654555
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2446927374301676,
"acc_stderr": 0.01437816988409845,
"acc_norm": 0.2446927374301676,
"acc_norm_stderr": 0.01437816988409845
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.024954184324879905,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.024954184324879905
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.24437299035369775,
"acc_stderr": 0.024406162094668917,
"acc_norm": 0.24437299035369775,
"acc_norm_stderr": 0.024406162094668917
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.22530864197530864,
"acc_stderr": 0.023246202647819753,
"acc_norm": 0.22530864197530864,
"acc_norm_stderr": 0.023246202647819753
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.24468085106382978,
"acc_stderr": 0.025645553622266722,
"acc_norm": 0.24468085106382978,
"acc_norm_stderr": 0.025645553622266722
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24119947848761408,
"acc_stderr": 0.010926496102034965,
"acc_norm": 0.24119947848761408,
"acc_norm_stderr": 0.010926496102034965
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.44485294117647056,
"acc_stderr": 0.03018753206032938,
"acc_norm": 0.44485294117647056,
"acc_norm_stderr": 0.03018753206032938
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.018120224251484594,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.018120224251484594
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.20909090909090908,
"acc_stderr": 0.03895091015724137,
"acc_norm": 0.20909090909090908,
"acc_norm_stderr": 0.03895091015724137
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.40408163265306124,
"acc_stderr": 0.031414708025865885,
"acc_norm": 0.40408163265306124,
"acc_norm_stderr": 0.031414708025865885
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23383084577114427,
"acc_stderr": 0.029929415408348398,
"acc_norm": 0.23383084577114427,
"acc_norm_stderr": 0.029929415408348398
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2710843373493976,
"acc_stderr": 0.03460579907553028,
"acc_norm": 0.2710843373493976,
"acc_norm_stderr": 0.03460579907553028
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.27485380116959063,
"acc_stderr": 0.034240429246915824,
"acc_norm": 0.27485380116959063,
"acc_norm_stderr": 0.034240429246915824
},
"harness|truthfulqa:mc|0": {
"mc1": 0.23745410036719705,
"mc1_stderr": 0.01489627744104184,
"mc2": 0.4121958367286861,
"mc2_stderr": 0.014564451157949564
},
"harness|winogrande|5": {
"acc": 0.5311760063141279,
"acc_stderr": 0.014025142640639518
},
"harness|drop|3": {
"em": 0.0018875838926174498,
"em_stderr": 0.00044451099905590827,
"f1": 0.044600461409396115,
"f1_stderr": 0.0012188499729627125
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022545013
}
}
```
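As a small illustration (not part of the original card), the per-task numbers above can be aggregated by hand. The sketch below assumes the linked results JSON has been saved locally as `results.json` and that its contents match the dictionary shown above; it simply averages `acc` over the MMLU (hendrycksTest) subtasks.

```python
import json

# Load the per-run results file (downloaded from the link above).
with open("results.json") as f:
    scores = json.load(f)

# Average accuracy over the MMLU (hendrycksTest) subtasks shown above.
mmlu_accs = [
    task_scores["acc"]
    for task, task_scores in scores.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"{len(mmlu_accs)} MMLU tasks, mean acc = {sum(mmlu_accs) / len(mmlu_accs):.4f}")
```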
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_EleutherAI__pythia-410m | [
"region:us"
]
| 2023-11-13T14:14:11+00:00 | {"pretty_name": "Evaluation run of EleutherAI/pythia-410m", "dataset_summary": "Dataset automatically created during the evaluation run of model [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_EleutherAI__pythia-410m_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T14:11:57.049362](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-410m_public/blob/main/results_2023-11-13T14-11-57.049362.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.27276461407760977,\n \"acc_stderr\": 0.03137987193481735,\n \"acc_norm\": 0.27458252512347847,\n \"acc_norm_stderr\": 0.032179450217890426,\n \"mc1\": 0.23745410036719705,\n \"mc1_stderr\": 0.01489627744104184,\n \"mc2\": 0.4121958367286861,\n \"mc2_stderr\": 0.014564451157949564,\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.00044451099905590827,\n \"f1\": 0.044600461409396115,\n \"f1_stderr\": 0.0012188499729627125\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.23122866894197952,\n \"acc_stderr\": 0.012320858834772283,\n \"acc_norm\": 0.2619453924914676,\n \"acc_norm_stderr\": 0.012849054826858115\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.33947420832503483,\n \"acc_stderr\": 0.004725630911520322,\n \"acc_norm\": 0.4084843656642103,\n \"acc_norm_stderr\": 0.00490548949400508\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.03885004245800255,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.03885004245800255\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.21710526315789475,\n \"acc_stderr\": 0.03355045304882924,\n \"acc_norm\": 0.21710526315789475,\n \"acc_norm_stderr\": 0.03355045304882924\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2490566037735849,\n \"acc_stderr\": 0.02661648298050171,\n \"acc_norm\": 0.2490566037735849,\n \"acc_norm_stderr\": 0.02661648298050171\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.03745554791462457,\n \"acc_norm\": 0.2777777777777778,\n 
\"acc_norm_stderr\": 0.03745554791462457\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.20809248554913296,\n \"acc_stderr\": 0.03095289021774988,\n \"acc_norm\": 0.20809248554913296,\n \"acc_norm_stderr\": 0.03095289021774988\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.04280105837364395,\n \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.04280105837364395\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036843,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036843\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.28936170212765955,\n \"acc_stderr\": 0.029644006577009618,\n \"acc_norm\": 0.28936170212765955,\n \"acc_norm_stderr\": 0.029644006577009618\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.22758620689655173,\n \"acc_stderr\": 0.03493950380131183,\n \"acc_norm\": 0.22758620689655173,\n \"acc_norm_stderr\": 0.03493950380131183\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.23809523809523808,\n \"acc_stderr\": 0.021935878081184756,\n \"acc_norm\": 0.23809523809523808,\n \"acc_norm_stderr\": 0.021935878081184756\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n \"acc_stderr\": 0.04073524322147125,\n \"acc_norm\": 0.29365079365079366,\n \"acc_norm_stderr\": 0.04073524322147125\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.17,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.29354838709677417,\n \"acc_stderr\": 0.025906087021319288,\n \"acc_norm\": 0.29354838709677417,\n \"acc_norm_stderr\": 0.025906087021319288\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.26108374384236455,\n \"acc_stderr\": 0.030903796952114475,\n \"acc_norm\": 0.26108374384236455,\n \"acc_norm_stderr\": 0.030903796952114475\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.23636363636363636,\n \"acc_stderr\": 0.033175059300091805,\n \"acc_norm\": 0.23636363636363636,\n \"acc_norm_stderr\": 0.033175059300091805\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.30303030303030304,\n \"acc_stderr\": 0.032742879140268674,\n \"acc_norm\": 0.30303030303030304,\n \"acc_norm_stderr\": 0.032742879140268674\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.23316062176165803,\n \"acc_stderr\": 
0.030516111371476008,\n \"acc_norm\": 0.23316062176165803,\n \"acc_norm_stderr\": 0.030516111371476008\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.358974358974359,\n \"acc_stderr\": 0.024321738484602364,\n \"acc_norm\": 0.358974358974359,\n \"acc_norm_stderr\": 0.024321738484602364\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.026842057873833706,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.026842057873833706\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.23949579831932774,\n \"acc_stderr\": 0.027722065493361276,\n \"acc_norm\": 0.23949579831932774,\n \"acc_norm_stderr\": 0.027722065493361276\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2913907284768212,\n \"acc_stderr\": 0.03710185726119995,\n \"acc_norm\": 0.2913907284768212,\n \"acc_norm_stderr\": 0.03710185726119995\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.326605504587156,\n \"acc_stderr\": 0.020106990889937303,\n \"acc_norm\": 0.326605504587156,\n \"acc_norm_stderr\": 0.020106990889937303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2696078431372549,\n \"acc_stderr\": 0.031145570659486782,\n \"acc_norm\": 0.2696078431372549,\n \"acc_norm_stderr\": 0.031145570659486782\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.22362869198312235,\n \"acc_stderr\": 0.027123298205229972,\n \"acc_norm\": 0.22362869198312235,\n \"acc_norm_stderr\": 0.027123298205229972\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.25112107623318386,\n \"acc_stderr\": 0.029105220833224605,\n \"acc_norm\": 0.25112107623318386,\n \"acc_norm_stderr\": 0.029105220833224605\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2366412213740458,\n \"acc_stderr\": 0.037276735755969195,\n \"acc_norm\": 0.2366412213740458,\n \"acc_norm_stderr\": 0.037276735755969195\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.4049586776859504,\n \"acc_stderr\": 0.04481137755942469,\n \"acc_norm\": 0.4049586776859504,\n \"acc_norm_stderr\": 0.04481137755942469\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.21296296296296297,\n \"acc_stderr\": 0.039578354719809805,\n \"acc_norm\": 0.21296296296296297,\n \"acc_norm_stderr\": 0.039578354719809805\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.2822085889570552,\n \"acc_stderr\": 0.03536117886664743,\n \"acc_norm\": 0.2822085889570552,\n \"acc_norm_stderr\": 0.03536117886664743\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.21428571428571427,\n \"acc_stderr\": 0.03894641120044792,\n \"acc_norm\": 0.21428571428571427,\n \"acc_norm_stderr\": 0.03894641120044792\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.2621359223300971,\n \"acc_stderr\": 0.04354631077260594,\n \"acc_norm\": 0.2621359223300971,\n \"acc_norm_stderr\": 0.04354631077260594\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.20512820512820512,\n \"acc_stderr\": 0.026453508054040335,\n \"acc_norm\": 0.20512820512820512,\n \"acc_norm_stderr\": 0.026453508054040335\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n 
\"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.25798212005108556,\n \"acc_stderr\": 0.01564583018834895,\n \"acc_norm\": 0.25798212005108556,\n \"acc_norm_stderr\": 0.01564583018834895\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.2543352601156069,\n \"acc_stderr\": 0.02344582627654555,\n \"acc_norm\": 0.2543352601156069,\n \"acc_norm_stderr\": 0.02344582627654555\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2446927374301676,\n \"acc_stderr\": 0.01437816988409845,\n \"acc_norm\": 0.2446927374301676,\n \"acc_norm_stderr\": 0.01437816988409845\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.024954184324879905,\n \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.024954184324879905\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.24437299035369775,\n \"acc_stderr\": 0.024406162094668917,\n \"acc_norm\": 0.24437299035369775,\n \"acc_norm_stderr\": 0.024406162094668917\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.22530864197530864,\n \"acc_stderr\": 0.023246202647819753,\n \"acc_norm\": 0.22530864197530864,\n \"acc_norm_stderr\": 0.023246202647819753\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.24468085106382978,\n \"acc_stderr\": 0.025645553622266722,\n \"acc_norm\": 0.24468085106382978,\n \"acc_norm_stderr\": 0.025645553622266722\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24119947848761408,\n \"acc_stderr\": 0.010926496102034965,\n \"acc_norm\": 0.24119947848761408,\n \"acc_norm_stderr\": 0.010926496102034965\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.44485294117647056,\n \"acc_stderr\": 0.03018753206032938,\n \"acc_norm\": 0.44485294117647056,\n \"acc_norm_stderr\": 0.03018753206032938\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.018120224251484594,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.018120224251484594\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.03895091015724137,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.03895091015724137\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.40408163265306124,\n \"acc_stderr\": 0.031414708025865885,\n \"acc_norm\": 0.40408163265306124,\n \"acc_norm_stderr\": 0.031414708025865885\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23383084577114427,\n \"acc_stderr\": 0.029929415408348398,\n \"acc_norm\": 0.23383084577114427,\n \"acc_norm_stderr\": 0.029929415408348398\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2710843373493976,\n \"acc_stderr\": 0.03460579907553028,\n \"acc_norm\": 0.2710843373493976,\n \"acc_norm_stderr\": 0.03460579907553028\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.27485380116959063,\n \"acc_stderr\": 0.034240429246915824,\n \"acc_norm\": 0.27485380116959063,\n \"acc_norm_stderr\": 0.034240429246915824\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23745410036719705,\n \"mc1_stderr\": 0.01489627744104184,\n \"mc2\": 0.4121958367286861,\n \"mc2_stderr\": 0.014564451157949564\n },\n \"harness|winogrande|5\": {\n \"acc\": 
0.5311760063141279,\n \"acc_stderr\": 0.014025142640639518\n },\n \"harness|drop|3\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.00044451099905590827,\n \"f1\": 0.044600461409396115,\n \"f1_stderr\": 0.0012188499729627125\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.0022675371022545013\n }\n}\n```", "repo_url": "https://huggingface.co/EleutherAI/pythia-410m", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|arc:challenge|25_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|drop|3_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|gsm8k|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hellaswag|10_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-11-57.049362.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-11-57.049362.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T14-11-57.049362.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["**/details_harness|winogrande|5_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T14-11-57.049362.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T14_11_57.049362", "path": ["results_2023-11-13T14-11-57.049362.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T14-11-57.049362.parquet"]}]}]} | 2023-11-13T14:14:55+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of EleutherAI/pythia-410m
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model EleutherAI/pythia-410m on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
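A minimal sketch of that call, assuming this repository follows the leaderboard's usual `details_<organisation>__<model>_public` naming convention (the repository id is rendered as "URL" above, so the name below is inferred rather than quoted; the configuration and split names come from this row's metadata):
```python
from datasets import load_dataset

# "harness_winogrande_5" is one of the 64 per-task configurations listed in this
# row's metadata; the "latest" split always points to the most recent run.
data = load_dataset("open-llm-leaderboard/details_EleutherAI__pythia-410m_public",
                    "harness_winogrande_5",
                    split="latest")
```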
## Latest results
These are the latest results from run 2023-11-13T14:11:57.049362 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of EleutherAI/pythia-410m",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model EleutherAI/pythia-410m on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T14:11:57.049362(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of EleutherAI/pythia-410m",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model EleutherAI/pythia-410m on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T14:11:57.049362(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
19,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of EleutherAI/pythia-410m## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model EleutherAI/pythia-410m on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T14:11:57.049362(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
ff505c1d7e320dcc4dc85ba5d6a002c058174c00 | # Translated STS dataset to Norwegian Bokmål
Machine translated using the *No language left behind* model series, specifically the 1.3B variant: https://huggingface.co/facebook/nllb-200-distilled-1.3B | tollefj/stsbenchmark-sts-NOB | [
"license:cc-by-4.0",
"region:us"
]
| 2023-11-13T14:30:48+00:00 | {"license": "cc-by-4.0"} | 2024-01-06T12:27:15+00:00 | []
| []
| TAGS
#license-cc-by-4.0 #region-us
| # Translated STS dataset to Norwegian Bokmål
Machine translated using the *No language left behind* model series, specifically the 1.3B variant: URL | [
"# Translated STS dataset to Norwegian Bokmål\nMachine translated using the *No language left behind* model series, specifically the 1.3B variant: URL"
]
| [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Translated STS dataset to Norwegian Bokmål\nMachine translated using the *No language left behind* model series, specifically the 1.3B variant: URL"
]
| [
15,
34
]
| [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n# Translated STS dataset to Norwegian Bokmål\nMachine translated using the *No language left behind* model series, specifically the 1.3B variant: URL"
]
|
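For the "Translated STS dataset to Norwegian Bokmål" record above, the card only states that the text was machine translated with the distilled 1.3B NLLB-200 checkpoint. The following is a minimal, hypothetical sketch of that setup; the split layout of `tollefj/stsbenchmark-sts-NOB` and the generation settings actually used are not documented, so they are assumptions here:
```python
from datasets import load_dataset
from transformers import pipeline

# Load the translated benchmark; no split is requested because the card does not
# document the split layout.
sts_nob = load_dataset("tollefj/stsbenchmark-sts-NOB")

# Sketch of the translation step described in the card:
# English -> Norwegian Bokmål with facebook/nllb-200-distilled-1.3B.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-1.3B",
    src_lang="eng_Latn",
    tgt_lang="nob_Latn",
)
print(translator("A man is playing a guitar.")[0]["translation_text"])
```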
b44d3dd23f2ce568aae900b835d9fb9ee2530eb4 | # Dataset Card for "mistral-intent-data-1732"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pankajemplay/mistral-intent-data-1732 | [
"region:us"
]
| 2023-11-13T14:31:31+00:00 | {"dataset_info": {"features": [{"name": "User Query", "dtype": "string"}, {"name": "Intent", "dtype": "string"}, {"name": "id type", "dtype": "string"}, {"name": "id value", "dtype": "string"}, {"name": "id slot filled", "dtype": "bool"}, {"name": "Task", "dtype": "string"}, {"name": "task slot filled", "dtype": "bool"}, {"name": "Bot Response", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1286238, "num_examples": 1732}], "download_size": 271889, "dataset_size": 1286238}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T14:31:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mistral-intent-data-1732"
More Information needed | [
"# Dataset Card for \"mistral-intent-data-1732\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mistral-intent-data-1732\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mistral-intent-data-1732\"\n\nMore Information needed"
]
|
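The "mistral-intent-data-1732" card above is still a placeholder, but the dataset_info metadata attached to it already lists the features (User Query, Intent, id type, id value, id slot filled, Task, task slot filled, Bot Response, text) and a single train split of 1,732 examples. A minimal loading sketch based only on that metadata:
```python
from datasets import load_dataset

# Single "train" split with 1,732 rows, per the dataset_info metadata above.
ds = load_dataset("pankajemplay/mistral-intent-data-1732", split="train")

print(ds.column_names)      # ["User Query", "Intent", "id type", ...]
print(ds[0]["User Query"])  # inspect one example
```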
6e31d7d31b16876a2fd49936b49d04c057669ae4 |
# Dataset Card for Evaluation run of L-R/LLmRa-2.7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/L-R/LLmRa-2.7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [L-R/LLmRa-2.7B](https://huggingface.co/L-R/LLmRa-2.7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_L-R__LLmRa-2.7B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-13T14:52:35.782186](https://huggingface.co/datasets/open-llm-leaderboard/details_L-R__LLmRa-2.7B_public/blob/main/results_2023-11-13T14-52-35.782186.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2619182180653927,
"acc_stderr": 0.031054877346083407,
"acc_norm": 0.2636967484818349,
"acc_norm_stderr": 0.031856551298856575,
"mc1": 0.22643818849449204,
"mc1_stderr": 0.014651337324602581,
"mc2": 0.3522535522108365,
"mc2_stderr": 0.01379814047299605,
"em": 0.0009437919463087249,
"em_stderr": 0.0003144653119413285,
"f1": 0.04760067114093977,
"f1_stderr": 0.0011764663842453984
},
"harness|arc:challenge|25": {
"acc": 0.32081911262798635,
"acc_stderr": 0.013640943091946526,
"acc_norm": 0.3703071672354949,
"acc_norm_stderr": 0.01411129875167495
},
"harness|hellaswag|10": {
"acc": 0.4561840270862378,
"acc_stderr": 0.004970585328297622,
"acc_norm": 0.6064528978291177,
"acc_norm_stderr": 0.0048753793520798245
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.033176727875331574,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.033176727875331574
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22264150943396227,
"acc_stderr": 0.0256042334708991,
"acc_norm": 0.22264150943396227,
"acc_norm_stderr": 0.0256042334708991
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2638888888888889,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.2638888888888889,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.23699421965317918,
"acc_stderr": 0.03242414757483098,
"acc_norm": 0.23699421965317918,
"acc_norm_stderr": 0.03242414757483098
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.04220773659171452,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.04220773659171452
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2425531914893617,
"acc_stderr": 0.028020226271200217,
"acc_norm": 0.2425531914893617,
"acc_norm_stderr": 0.028020226271200217
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2543859649122807,
"acc_stderr": 0.040969851398436695,
"acc_norm": 0.2543859649122807,
"acc_norm_stderr": 0.040969851398436695
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.296551724137931,
"acc_stderr": 0.03806142687309993,
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309993
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.23015873015873015,
"acc_stderr": 0.021679219663693135,
"acc_norm": 0.23015873015873015,
"acc_norm_stderr": 0.021679219663693135
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15079365079365079,
"acc_stderr": 0.03200686497287394,
"acc_norm": 0.15079365079365079,
"acc_norm_stderr": 0.03200686497287394
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.19032258064516128,
"acc_stderr": 0.02233170761182307,
"acc_norm": 0.19032258064516128,
"acc_norm_stderr": 0.02233170761182307
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3054187192118227,
"acc_stderr": 0.03240661565868408,
"acc_norm": 0.3054187192118227,
"acc_norm_stderr": 0.03240661565868408
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24242424242424243,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.24242424242424243,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.21212121212121213,
"acc_stderr": 0.029126522834586804,
"acc_norm": 0.21212121212121213,
"acc_norm_stderr": 0.029126522834586804
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.21761658031088082,
"acc_stderr": 0.029778663037752943,
"acc_norm": 0.21761658031088082,
"acc_norm_stderr": 0.029778663037752943
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2358974358974359,
"acc_stderr": 0.021525965407408726,
"acc_norm": 0.2358974358974359,
"acc_norm_stderr": 0.021525965407408726
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.027420019350945277,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.027420019350945277
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.23109243697478993,
"acc_stderr": 0.027381406927868956,
"acc_norm": 0.23109243697478993,
"acc_norm_stderr": 0.027381406927868956
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.25165562913907286,
"acc_stderr": 0.03543304234389985,
"acc_norm": 0.25165562913907286,
"acc_norm_stderr": 0.03543304234389985
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.23302752293577983,
"acc_stderr": 0.018125669180861507,
"acc_norm": 0.23302752293577983,
"acc_norm_stderr": 0.018125669180861507
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2175925925925926,
"acc_stderr": 0.028139689444859693,
"acc_norm": 0.2175925925925926,
"acc_norm_stderr": 0.028139689444859693
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.23039215686274508,
"acc_stderr": 0.029554292605695066,
"acc_norm": 0.23039215686274508,
"acc_norm_stderr": 0.029554292605695066
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.29535864978902954,
"acc_stderr": 0.029696338713422893,
"acc_norm": 0.29535864978902954,
"acc_norm_stderr": 0.029696338713422893
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.21076233183856502,
"acc_stderr": 0.02737309550054019,
"acc_norm": 0.21076233183856502,
"acc_norm_stderr": 0.02737309550054019
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.22900763358778625,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.22900763358778625,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.24793388429752067,
"acc_stderr": 0.039418975265163025,
"acc_norm": 0.24793388429752067,
"acc_norm_stderr": 0.039418975265163025
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946315,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946315
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2822085889570552,
"acc_stderr": 0.03536117886664743,
"acc_norm": 0.2822085889570552,
"acc_norm_stderr": 0.03536117886664743
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.30357142857142855,
"acc_stderr": 0.04364226155841044,
"acc_norm": 0.30357142857142855,
"acc_norm_stderr": 0.04364226155841044
},
"harness|hendrycksTest-management|5": {
"acc": 0.33980582524271846,
"acc_stderr": 0.046897659372781335,
"acc_norm": 0.33980582524271846,
"acc_norm_stderr": 0.046897659372781335
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.23076923076923078,
"acc_stderr": 0.027601921381417614,
"acc_norm": 0.23076923076923078,
"acc_norm_stderr": 0.027601921381417614
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26309067688378035,
"acc_stderr": 0.015745497169049046,
"acc_norm": 0.26309067688378035,
"acc_norm_stderr": 0.015745497169049046
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24566473988439305,
"acc_stderr": 0.023176298203992002,
"acc_norm": 0.24566473988439305,
"acc_norm_stderr": 0.023176298203992002
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.25163398692810457,
"acc_stderr": 0.0248480182638752,
"acc_norm": 0.25163398692810457,
"acc_norm_stderr": 0.0248480182638752
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.3440514469453376,
"acc_stderr": 0.026981478043648022,
"acc_norm": 0.3440514469453376,
"acc_norm_stderr": 0.026981478043648022
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.20987654320987653,
"acc_stderr": 0.02265834408598136,
"acc_norm": 0.20987654320987653,
"acc_norm_stderr": 0.02265834408598136
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2695035460992908,
"acc_stderr": 0.026469036818590634,
"acc_norm": 0.2695035460992908,
"acc_norm_stderr": 0.026469036818590634
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24445893089960888,
"acc_stderr": 0.010976425013113899,
"acc_norm": 0.24445893089960888,
"acc_norm_stderr": 0.010976425013113899
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.19117647058823528,
"acc_stderr": 0.023886881922440345,
"acc_norm": 0.19117647058823528,
"acc_norm_stderr": 0.023886881922440345
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.26143790849673204,
"acc_stderr": 0.017776947157528044,
"acc_norm": 0.26143790849673204,
"acc_norm_stderr": 0.017776947157528044
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.36363636363636365,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.36363636363636365,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.19591836734693877,
"acc_stderr": 0.025409301953225678,
"acc_norm": 0.19591836734693877,
"acc_norm_stderr": 0.025409301953225678
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.29850746268656714,
"acc_stderr": 0.03235743789355042,
"acc_norm": 0.29850746268656714,
"acc_norm_stderr": 0.03235743789355042
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-virology|5": {
"acc": 0.25301204819277107,
"acc_stderr": 0.033844291552331346,
"acc_norm": 0.25301204819277107,
"acc_norm_stderr": 0.033844291552331346
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.28654970760233917,
"acc_stderr": 0.03467826685703826,
"acc_norm": 0.28654970760233917,
"acc_norm_stderr": 0.03467826685703826
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22643818849449204,
"mc1_stderr": 0.014651337324602581,
"mc2": 0.3522535522108365,
"mc2_stderr": 0.01379814047299605
},
"harness|winogrande|5": {
"acc": 0.6156274664561957,
"acc_stderr": 0.01367156760083619
},
"harness|drop|3": {
"em": 0.0009437919463087249,
"em_stderr": 0.0003144653119413285,
"f1": 0.04760067114093977,
"f1_stderr": 0.0011764663842453984
},
"harness|gsm8k|5": {
"acc": 0.003032600454890068,
"acc_stderr": 0.0015145735612245427
}
}
```
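The aggregated figures above can also be retrieved programmatically. A minimal sketch, assuming the "results" configuration exposes the same "latest" split as the per-task configurations of this repository:
```python
from datasets import load_dataset

# Aggregated metrics of the most recent evaluation run for L-R/LLmRa-2.7B.
results = load_dataset("open-llm-leaderboard/details_L-R__LLmRa-2.7B_public",
                       "results",
                       split="latest")
```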
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_L-R__LLmRa-2.7B | [
"region:us"
]
| 2023-11-13T14:54:50+00:00 | {"pretty_name": "Evaluation run of L-R/LLmRa-2.7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [L-R/LLmRa-2.7B](https://huggingface.co/L-R/LLmRa-2.7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_L-R__LLmRa-2.7B_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T14:52:35.782186](https://huggingface.co/datasets/open-llm-leaderboard/details_L-R__LLmRa-2.7B_public/blob/main/results_2023-11-13T14-52-35.782186.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2619182180653927,\n \"acc_stderr\": 0.031054877346083407,\n \"acc_norm\": 0.2636967484818349,\n \"acc_norm_stderr\": 0.031856551298856575,\n \"mc1\": 0.22643818849449204,\n \"mc1_stderr\": 0.014651337324602581,\n \"mc2\": 0.3522535522108365,\n \"mc2_stderr\": 0.01379814047299605,\n \"em\": 0.0009437919463087249,\n \"em_stderr\": 0.0003144653119413285,\n \"f1\": 0.04760067114093977,\n \"f1_stderr\": 0.0011764663842453984\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.32081911262798635,\n \"acc_stderr\": 0.013640943091946526,\n \"acc_norm\": 0.3703071672354949,\n \"acc_norm_stderr\": 0.01411129875167495\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4561840270862378,\n \"acc_stderr\": 0.004970585328297622,\n \"acc_norm\": 0.6064528978291177,\n \"acc_norm_stderr\": 0.0048753793520798245\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.033176727875331574,\n \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.033176727875331574\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.22264150943396227,\n \"acc_stderr\": 0.0256042334708991,\n \"acc_norm\": 0.22264150943396227,\n \"acc_norm_stderr\": 0.0256042334708991\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2638888888888889,\n \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.2638888888888889,\n \"acc_norm_stderr\": 0.03685651095897532\n 
},\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.23699421965317918,\n \"acc_stderr\": 0.03242414757483098,\n \"acc_norm\": 0.23699421965317918,\n \"acc_norm_stderr\": 0.03242414757483098\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.04220773659171452,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.04220773659171452\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2425531914893617,\n \"acc_stderr\": 0.028020226271200217,\n \"acc_norm\": 0.2425531914893617,\n \"acc_norm_stderr\": 0.028020226271200217\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n \"acc_stderr\": 0.040969851398436695,\n \"acc_norm\": 0.2543859649122807,\n \"acc_norm_stderr\": 0.040969851398436695\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.03806142687309993,\n \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.03806142687309993\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.23015873015873015,\n \"acc_stderr\": 0.021679219663693135,\n \"acc_norm\": 0.23015873015873015,\n \"acc_norm_stderr\": 0.021679219663693135\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n \"acc_stderr\": 0.03200686497287394,\n \"acc_norm\": 0.15079365079365079,\n \"acc_norm_stderr\": 0.03200686497287394\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.19032258064516128,\n \"acc_stderr\": 0.02233170761182307,\n \"acc_norm\": 0.19032258064516128,\n \"acc_norm_stderr\": 0.02233170761182307\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3054187192118227,\n \"acc_stderr\": 0.03240661565868408,\n \"acc_norm\": 0.3054187192118227,\n \"acc_norm_stderr\": 0.03240661565868408\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.21212121212121213,\n \"acc_stderr\": 0.029126522834586804,\n \"acc_norm\": 0.21212121212121213,\n \"acc_norm_stderr\": 0.029126522834586804\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.21761658031088082,\n \"acc_stderr\": 0.029778663037752943,\n \"acc_norm\": 
0.21761658031088082,\n \"acc_norm_stderr\": 0.029778663037752943\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2358974358974359,\n \"acc_stderr\": 0.021525965407408726,\n \"acc_norm\": 0.2358974358974359,\n \"acc_norm_stderr\": 0.021525965407408726\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.027420019350945277,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.027420019350945277\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.23109243697478993,\n \"acc_stderr\": 0.027381406927868956,\n \"acc_norm\": 0.23109243697478993,\n \"acc_norm_stderr\": 0.027381406927868956\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.25165562913907286,\n \"acc_stderr\": 0.03543304234389985,\n \"acc_norm\": 0.25165562913907286,\n \"acc_norm_stderr\": 0.03543304234389985\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.23302752293577983,\n \"acc_stderr\": 0.018125669180861507,\n \"acc_norm\": 0.23302752293577983,\n \"acc_norm_stderr\": 0.018125669180861507\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.2175925925925926,\n \"acc_stderr\": 0.028139689444859693,\n \"acc_norm\": 0.2175925925925926,\n \"acc_norm_stderr\": 0.028139689444859693\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.23039215686274508,\n \"acc_stderr\": 0.029554292605695066,\n \"acc_norm\": 0.23039215686274508,\n \"acc_norm_stderr\": 0.029554292605695066\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.29535864978902954,\n \"acc_stderr\": 0.029696338713422893,\n \"acc_norm\": 0.29535864978902954,\n \"acc_norm_stderr\": 0.029696338713422893\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.21076233183856502,\n \"acc_stderr\": 0.02737309550054019,\n \"acc_norm\": 0.21076233183856502,\n \"acc_norm_stderr\": 0.02737309550054019\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.22900763358778625,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.22900763358778625,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.24793388429752067,\n \"acc_stderr\": 0.039418975265163025,\n \"acc_norm\": 0.24793388429752067,\n \"acc_norm_stderr\": 0.039418975265163025\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.042365112580946315,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.042365112580946315\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.2822085889570552,\n \"acc_stderr\": 0.03536117886664743,\n \"acc_norm\": 0.2822085889570552,\n \"acc_norm_stderr\": 0.03536117886664743\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.30357142857142855,\n \"acc_stderr\": 0.04364226155841044,\n \"acc_norm\": 0.30357142857142855,\n \"acc_norm_stderr\": 0.04364226155841044\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.33980582524271846,\n \"acc_stderr\": 0.046897659372781335,\n \"acc_norm\": 0.33980582524271846,\n \"acc_norm_stderr\": 0.046897659372781335\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.23076923076923078,\n \"acc_stderr\": 0.027601921381417614,\n \"acc_norm\": 0.23076923076923078,\n \"acc_norm_stderr\": 0.027601921381417614\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.26,\n 
\"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26309067688378035,\n \"acc_stderr\": 0.015745497169049046,\n \"acc_norm\": 0.26309067688378035,\n \"acc_norm_stderr\": 0.015745497169049046\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.023176298203992002,\n \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.023176298203992002\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.25163398692810457,\n \"acc_stderr\": 0.0248480182638752,\n \"acc_norm\": 0.25163398692810457,\n \"acc_norm_stderr\": 0.0248480182638752\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3440514469453376,\n \"acc_stderr\": 0.026981478043648022,\n \"acc_norm\": 0.3440514469453376,\n \"acc_norm_stderr\": 0.026981478043648022\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.20987654320987653,\n \"acc_stderr\": 0.02265834408598136,\n \"acc_norm\": 0.20987654320987653,\n \"acc_norm_stderr\": 0.02265834408598136\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2695035460992908,\n \"acc_stderr\": 0.026469036818590634,\n \"acc_norm\": 0.2695035460992908,\n \"acc_norm_stderr\": 0.026469036818590634\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24445893089960888,\n \"acc_stderr\": 0.010976425013113899,\n \"acc_norm\": 0.24445893089960888,\n \"acc_norm_stderr\": 0.010976425013113899\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.19117647058823528,\n \"acc_stderr\": 0.023886881922440345,\n \"acc_norm\": 0.19117647058823528,\n \"acc_norm_stderr\": 0.023886881922440345\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.26143790849673204,\n \"acc_stderr\": 0.017776947157528044,\n \"acc_norm\": 0.26143790849673204,\n \"acc_norm_stderr\": 0.017776947157528044\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.36363636363636365,\n \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.36363636363636365,\n \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.19591836734693877,\n \"acc_stderr\": 0.025409301953225678,\n \"acc_norm\": 0.19591836734693877,\n \"acc_norm_stderr\": 0.025409301953225678\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.29850746268656714,\n \"acc_stderr\": 0.03235743789355042,\n \"acc_norm\": 0.29850746268656714,\n \"acc_norm_stderr\": 0.03235743789355042\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.25301204819277107,\n \"acc_stderr\": 0.033844291552331346,\n \"acc_norm\": 0.25301204819277107,\n \"acc_norm_stderr\": 0.033844291552331346\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.28654970760233917,\n \"acc_stderr\": 0.03467826685703826,\n \"acc_norm\": 0.28654970760233917,\n \"acc_norm_stderr\": 0.03467826685703826\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22643818849449204,\n \"mc1_stderr\": 0.014651337324602581,\n \"mc2\": 0.3522535522108365,\n \"mc2_stderr\": 0.01379814047299605\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6156274664561957,\n 
\"acc_stderr\": 0.01367156760083619\n },\n \"harness|drop|3\": {\n \"em\": 0.0009437919463087249,\n \"em_stderr\": 0.0003144653119413285,\n \"f1\": 0.04760067114093977,\n \"f1_stderr\": 0.0011764663842453984\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.003032600454890068,\n \"acc_stderr\": 0.0015145735612245427\n }\n}\n```", "repo_url": "https://huggingface.co/L-R/LLmRa-2.7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|arc:challenge|25_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|drop|3_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|gsm8k|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hellaswag|10_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-52-35.782186.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-52-35.782186.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T14-52-35.782186.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T14-52-35.782186.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T14-52-35.782186.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["**/details_harness|winogrande|5_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T14-52-35.782186.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T14_52_35.782186", "path": ["results_2023-11-13T14-52-35.782186.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T14-52-35.782186.parquet"]}]}]} | 2023-11-13T14:55:35+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of L-R/LLmRa-2.7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model L-R/LLmRa-2.7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
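A minimal, hypothetical sketch follows — the repository id is an assumption based on the leaderboard's usual details_<org>__<model> naming (some detail repos also carry a "_public" suffix), and "harness_winogrande_5" is just one of the configurations listed in this card:

```python
from datasets import load_dataset

# Assumed repo id; adjust if the details repo uses a different name
data = load_dataset(
    "open-llm-leaderboard/details_L-R__LLmRa-2.7B",
    "harness_winogrande_5",  # any of the 64 per-task configurations
    split="train",           # "train" always points to the latest results
)
```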
## Latest results
These are the latest results from run 2023-11-13T14:52:35.782186 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of L-R/LLmRa-2.7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model L-R/LLmRa-2.7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T14:52:35.782186(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of L-R/LLmRa-2.7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model L-R/LLmRa-2.7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T14:52:35.782186(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of L-R/LLmRa-2.7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model L-R/LLmRa-2.7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T14:52:35.782186(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
108ba24caa8bec86d5a3bbccfe9c3f9921e03116 | # climateBUG-Data
## Overview
climateBUG-Data is a part of the climateBUG framework. It focuses on analyzing climate-related discussions in EU banks' reporting using computational linguistics.
## Key Features
- **Dataset Composition**: The dataset includes over 1.07 million annotated statements from EU banks' annual and sustainability reports, covering the years 2015 to 2020. It provides an analysis of climate change and finance topics discussed in the European banking sector during this period.
- **Integration with climateBUG Framework**: Designed to be utilized with the climateBUG framework's deep learning model and analytical tools.
## Access and Usage
- Models, dataset and tools are available at the [climateBUG project page](https://www.climatebug.se/).
- Suitable for researchers and professionals in finance, sustainability, and climate policy.
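For orientation, here is a minimal loading sketch; the split name and field names below follow the dataset configuration (`statement`, `year`, `label`, `manual`), and access must have been granted first since the dataset is gated:

```python
from datasets import load_dataset

# Gated dataset: request access on the Hub and authenticate
# (e.g. `huggingface-cli login`) before loading.
ds = load_dataset("lumilogic/climateBUG-Data", split="train")

# Each record is one bank-report statement with its year,
# a climate / non-climate label, and a manual-annotation flag.
example = ds[0]
print(example["statement"], example["year"], example["label"], example["manual"])
```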
## Licensing and Availability
### Non-Commercial Research
- **License**: The climateBUG-Data, including its models and tools, is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).
- **Additional Restriction**: Redistribution or sharing of the dataset in any form is not permitted. This additional restriction complements the standard CC BY-NC-ND 4.0 license terms.
- **License Details**: Please review the [CC BY-NC-ND 4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/) for complete terms, keeping in mind our specific no-resharing clause.
### Commercial Use
- **Open to Collaboration**: We welcome interest from commercial entities and are open to exploring how the climateBUG-Data can contribute to your projects and initiatives.
- **Tailored Licensing Arrangements**: Understanding that commercial needs can vary, we are ready to discuss customized licensing arrangements that align with your specific requirements.
- **Contact Us**: To discuss potential collaborations and commercial licensing options, please reach out to us at [email protected].
## Citation
Please cite this dataset as follows:
Yu, Y., Scheidegger, S., Elliott, J., & Löfgren, Å. (2024). climateBUG: A data-driven framework for analyzing bank reporting through a climate lens. Expert Systems With Applications, 239, 122162.
```bibtex
@article{yu2024climatebug,
title = {climateBUG : A data-driven framework for analyzing bank reporting through a climate lens},
journal = {Expert Systems with Applications},
volume = {239},
pages = {122162},
year = {2024},
author = {Yinan Yu and Samuel Scheidegger and Jasmine Elliott and Åsa Löfgren}
}
```
## Support and Contact
For support, additional information, or inquiries, please reach out through [email protected] or visit the [climateBUG project page](https://www.climatebug.se/).
| lumilogic/climateBUG-Data | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-nc-nd-4.0",
"climate",
"finance",
"banking",
"EU",
"region:us"
]
| 2023-11-13T15:01:19+00:00 | {"language": ["en"], "license": "cc-by-nc-nd-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-classification"], "pretty_name": "climateBUG-Data", "tags": ["climate", "finance", "banking", "EU"], "dataset_info": {"features": [{"name": "statement", "dtype": "string"}, {"name": "year", "dtype": "int64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "non-climate", "1": "climate"}}}}, {"name": "manual", "dtype": "bool"}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/climatebug-data-train.parquet"}, {"split": "test", "path": "data/climatebug-data-test.parquet"}]}, {"config_name": "main", "data_files": "data/climatebug-data.parquet"}], "extra_gated_prompt": "Welcome to the access request form for climateBUG-Data. This dataset is available under the CC BY-NC-ND 4.0 license with the additional condition that the dataset cannot be reshared in any form.\nInterested in commercial licensing? Let us know here. We're open to explore how our dataset can support your projects.", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Purpose of Use": "text", "I acknowledge the non-commercial use terms of this dataset, unless a commercial license is granted": "checkbox"}, "extra_gated_heading": "Access Request for climateBUG-Data", "extra_gated_button_content": "Request Access"} | 2024-01-02T18:07:12+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-1M<n<10M #language-English #license-cc-by-nc-nd-4.0 #climate #finance #banking #EU #region-us
| # climateBUG-Data
## Overview
climateBUG-Data is a part of the climateBUG framework. It focuses on analyzing climate-related discussions in EU banks' reporting using computational linguistics.
## Key Features
- Dataset Composition: The dataset includes over 1.07 million annotated statements from EU banks' annual and sustainability reports, covering the years 2015 to 2020. It provides an analysis of climate change and finance topics discussed in the European banking sector during this period.
- Integration with climateBUG Framework: Designed to be utilized with the climateBUG framework's deep learning model and analytical tools.
## Access and Usage
- Models, dataset and tools are available at the climateBUG project page.
- Suitable for researchers and professionals in finance, sustainability, and climate policy.
## Licensing and Availability
### Non-Commercial Research
- License: The climateBUG-Data, including its models and tools, is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).
- Additional Restriction: Redistribution or sharing of the dataset in any form is not permitted. This additional restriction complements the standard CC BY-NC-ND 4.0 license terms.
- License Details: Please review the CC BY-NC-ND 4.0 license for complete terms, keeping in mind our specific no-resharing clause.
### Commercial Use
- Open to Collaboration: We welcome interest from commercial entities and are open to exploring how the climateBUG-Data can contribute to your projects and initiatives.
- Tailored Licensing Arrangements: Understanding that commercial needs can vary, we are ready to discuss customized licensing arrangements that align with your specific requirements.
- Contact Us: To discuss potential collaborations and commercial licensing options, please reach out to us at climatebug@URL.
Please cite this dataset as follows:
Yu, Y., Scheidegger, S., Elliott, J., & Löfgren, Å. (2024). climateBUG: A data-driven framework for analyzing bank reporting through a climate lens. Expert Systems With Applications, 239, 122162.
## Support and Contact
For support, additional information, or inquiries, please reach out through climatebug@URL or visit the climateBUG project page.
| [
"# climateBUG-Data",
"## Overview\nclimateBUG-Data is a part of the climateBUG framework. It focuses on analyzing climate-related discussions in EU banks' reporting using computational linguistics.",
"## Key Features\n- Dataset Composition: The dataset includes over 1.07 million annotated statements from EU banks' annual and sustainability reports, covering the years 2015 to 2020. It provides an analysis of climate change and finance topics discussed in the European banking sector during this period.\n- Integration with climateBUG Framework: Designed to be utilized with the climateBUG framework's deep learning model and analytical tools.",
"## Access and Usage\n- Models, dataset and tools are available at the climateBUG project page.\n- Suitable for researchers and professionals in finance, sustainability, and climate policy.",
"## Licensing and Availability",
"### Non-Commercial Research\n- License: The climateBUG-Data, including its models and tools, is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).\n- Additional Restriction: Redistribution or sharing of the dataset in any form is not permitted. This additional restriction complements the standard CC BY-NC-ND 4.0 license terms.\n- License Details: Please review the CC BY-NC-ND 4.0 license for complete terms, keeping in mind our specific no-resharing clause.",
"### Commercial Use\n- Open to Collaboration: We welcome interest from commercial entities and are open to exploring how the climateBUG-Data can contribute to your projects and initiatives.\n- Tailored Licensing Arrangements: Understanding that commercial needs can vary, we are ready to discuss customized licensing arrangements that align with your specific requirements.\n- Contact Us: To discuss potential collaborations and commercial licensing options, please reach out to us at climatebug@URL.\n\nPlease cite this dataset as follows:\n\nYu, Y., Scheidegger, S., Elliott, J., & Löfgren, Å. (2024). climateBUG: A data-driven framework for analyzing bank reporting through a climate lens. Expert Systems With Applications, 239, 122162.",
"## Support and Contact\nFor support, additional information, or inquiries, please reach out through climatebug@URL or visit the climateBUG project page."
]
| [
"TAGS\n#task_categories-text-classification #size_categories-1M<n<10M #language-English #license-cc-by-nc-nd-4.0 #climate #finance #banking #EU #region-us \n",
"# climateBUG-Data",
"## Overview\nclimateBUG-Data is a part of the climateBUG framework. It focuses on analyzing climate-related discussions in EU banks' reporting using computational linguistics.",
"## Key Features\n- Dataset Composition: The dataset includes over 1.07 million annotated statements from EU banks' annual and sustainability reports, covering the years 2015 to 2020. It provides an analysis of climate change and finance topics discussed in the European banking sector during this period.\n- Integration with climateBUG Framework: Designed to be utilized with the climateBUG framework's deep learning model and analytical tools.",
"## Access and Usage\n- Models, dataset and tools are available at the climateBUG project page.\n- Suitable for researchers and professionals in finance, sustainability, and climate policy.",
"## Licensing and Availability",
"### Non-Commercial Research\n- License: The climateBUG-Data, including its models and tools, is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).\n- Additional Restriction: Redistribution or sharing of the dataset in any form is not permitted. This additional restriction complements the standard CC BY-NC-ND 4.0 license terms.\n- License Details: Please review the CC BY-NC-ND 4.0 license for complete terms, keeping in mind our specific no-resharing clause.",
"### Commercial Use\n- Open to Collaboration: We welcome interest from commercial entities and are open to exploring how the climateBUG-Data can contribute to your projects and initiatives.\n- Tailored Licensing Arrangements: Understanding that commercial needs can vary, we are ready to discuss customized licensing arrangements that align with your specific requirements.\n- Contact Us: To discuss potential collaborations and commercial licensing options, please reach out to us at climatebug@URL.\n\nPlease cite this dataset as follows:\n\nYu, Y., Scheidegger, S., Elliott, J., & Löfgren, Å. (2024). climateBUG: A data-driven framework for analyzing bank reporting through a climate lens. Expert Systems With Applications, 239, 122162.",
"## Support and Contact\nFor support, additional information, or inquiries, please reach out through climatebug@URL or visit the climateBUG project page."
]
| [
58,
6,
43,
95,
42,
8,
126,
181,
32
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1M<n<10M #language-English #license-cc-by-nc-nd-4.0 #climate #finance #banking #EU #region-us \n# climateBUG-Data## Overview\nclimateBUG-Data is a part of the climateBUG framework. It focuses on analyzing climate-related discussions in EU banks' reporting using computational linguistics.## Key Features\n- Dataset Composition: The dataset includes over 1.07 million annotated statements from EU banks' annual and sustainability reports, covering the years 2015 to 2020. It provides an analysis of climate change and finance topics discussed in the European banking sector during this period.\n- Integration with climateBUG Framework: Designed to be utilized with the climateBUG framework's deep learning model and analytical tools.## Access and Usage\n- Models, dataset and tools are available at the climateBUG project page.\n- Suitable for researchers and professionals in finance, sustainability, and climate policy.## Licensing and Availability### Non-Commercial Research\n- License: The climateBUG-Data, including its models and tools, is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).\n- Additional Restriction: Redistribution or sharing of the dataset in any form is not permitted. This additional restriction complements the standard CC BY-NC-ND 4.0 license terms.\n- License Details: Please review the CC BY-NC-ND 4.0 license for complete terms, keeping in mind our specific no-resharing clause."
]
|
f0b17638933a6b48aa1e47cbaddf247537297263 | # cloudsen12
***``A dataset about clouds from Sentinel-2``***
CloudSEN12 is a LARGE dataset (~1 TB) for cloud semantic understanding that consists of 49,400 image patches (IP) that are evenly spread throughout all continents except Antarctica. Each IP covers 5090 x 5090 meters and contains data from Sentinel-2 levels 1C and 2A, hand-crafted annotations of thick and thin clouds and cloud shadows, Sentinel-1 Synthetic Aperture Radar (SAR), digital elevation model, surface water occurrence, land cover classes, and cloud mask results from six cutting-edge cloud detection algorithms.
CloudSEN12 is designed to support both weakly and self-/semi-supervised learning strategies by including three distinct forms of hand-crafted labeling data: high-quality, scribble and no-annotation. For more details on how we created the dataset, see our paper: CloudSEN12 - a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2.
**ML-STAC Snippet**
```python
import mlstac

# main.json describes the ML-STAC collection hosted on the Hugging Face Hub
secret = 'https://huggingface.co/datasets/jfloresf/mlstac-demo/resolve/main/main.json'
# Load the collection for PyTorch, streaming patches on demand to the CPU
train_db = mlstac.load(secret, framework='torch', stream=True, device='cpu')
```
<p align="center">
<img src="header.png" />
</p>
**Sensor: Sentinel2 - MSI**
**ML-STAC Task: TensorToTensor, TensorSegmentation**
**Data raw repository: [https://cloudsen12.github.io/](https://cloudsen12.github.io/)**
**Dataset discussion: [https://github.com/IPL-UV/ML-STAC/discussions/2](https://github.com/IPL-UV/ML-STAC/discussions/2)**
**Review mean score: 5.0**
**Split_strategy: random**
**Paper: [https://www.nature.com/articles/s41597-022-01878-2](https://www.nature.com/articles/s41597-022-01878-2)**
## Data Providers
|Name|Role|URL|
| :---: | :---: | :---: |
|Image & Signal Processing|['host']|https://isp.uv.es/|
|ESA|['producer']|https://www.esa.int/|
## Curators
|Name|Organization|URL|
| :---: | :---: | :---: |
|Jair Flores|OEFA|http://jflores.github.io/|
## Reviewers
|Name|Organization|URL|Score|
| :---: | :---: | :---: | :---: |
|Cesar Aybar|Image & Signal Processing|http://csaybar.github.io/|5|
## Labels
|Name|Value|
| :---: | :---: |
|clear|0|
|thick-cloud|1|
|thin-cloud|2|
|cloud-shadow|3|
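
As a small illustration of the class ids above, per-class pixel counts for a single annotation mask could be computed as follows (the mask here is synthetic; only the id-to-name mapping comes from the table):

```python
import numpy as np

# Class ids taken from the table above
LABELS = {0: "clear", 1: "thick-cloud", 2: "thin-cloud", 3: "cloud-shadow"}

# Synthetic stand-in for a hand-crafted annotation mask (one class id per pixel)
mask = np.random.randint(0, 4, size=(509, 509))

ids, counts = np.unique(mask, return_counts=True)
for i, n in zip(ids, counts):
    print(f"{LABELS[int(i)]}: {n} pixels")
```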
## Dimensions
### input
|Axis|Name|Description|
| :---: | :---: | :---: |
|0|C|Spectral bands|
|1|H|Height|
|2|W|Width|
### target
|Axis|Name|Description|
| :---: | :---: | :---: |
|0|C|Hand-crafted labels|
|1|H|Height|
|2|W|Width|
## Spectral Bands
|Name|Common Name|Description|Center Wavelength|Full Width Half Max|Index|
| :---: | :---: | :---: | :---: | :---: | :---: |
|B01|coastal aerosol|Band 1 - Coastal aerosol - 60m|443.5|17.0|0|
|B02|blue|Band 2 - Blue - 10m|496.5|53.0|1|
|B03|green|Band 3 - Green - 10m|560.0|34.0|2|
|B04|red|Band 4 - Red - 10m|664.5|29.0|3|
|B05|red edge 1|Band 5 - Vegetation red edge 1 - 20m|704.5|13.0|4|
|B06|red edge 2|Band 6 - Vegetation red edge 2 - 20m|740.5|13.0|5|
|B07|red edge 3|Band 7 - Vegetation red edge 3 - 20m|783.0|18.0|6|
|B08|NIR|Band 8 - Near infrared - 10m|840.0|114.0|7|
|B8A|red edge 4|Band 8A - Vegetation red edge 4 - 20m|864.5|19.0|8|
|B09|water vapor|Band 9 - Water vapor - 60m|945.0|18.0|9|
|B10|cirrus|Band 10 - Cirrus - 60m|1375.5|31.0|10|
|B11|SWIR 1|Band 11 - Shortwave infrared 1 - 20m|1613.5|89.0|11|
|B12|SWIR 2|Band 12 - Shortwave infrared 2 - 20m|2199.5|173.0|12|
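
To make the band indexing concrete, here is a sketch of building a true-colour composite from an input patch; the array is synthetic, the band indices come from the table above, and the (C, H, W) layout from the Dimensions section:

```python
import numpy as np

# Synthetic stand-in for an input patch: 13 bands ordered as in the table, (C, H, W)
patch = np.random.rand(13, 509, 509).astype("float32")

# B04 (red) -> index 3, B03 (green) -> index 2, B02 (blue) -> index 1
rgb = patch[[3, 2, 1], :, :]

# Move channels last, (H, W, 3), as most plotting libraries expect
rgb = np.moveaxis(rgb, 0, -1)
```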
| jfloresf/mlstac-demo | [
"language:en",
"clouds",
"sentinel-2",
"image-segmentation",
"deep-learning",
"remote-sensing",
"region:us"
]
| 2023-11-13T15:10:39+00:00 | {"language": ["en"], "pretty_name": "cloudsen12", "tags": ["clouds", "sentinel-2", "image-segmentation", "deep-learning", "remote-sensing"]} | 2023-11-13T15:22:49+00:00 | []
| [
"en"
]
| TAGS
#language-English #clouds #sentinel-2 #image-segmentation #deep-learning #remote-sensing #region-us
| cloudsen12
==========
*''A dataset about clouds from Sentinel-2''*
CloudSEN12 is a LARGE dataset (~1 TB) for cloud semantic understanding that consists of 49,400 image patches (IP) that are evenly spread throughout all continents except Antarctica. Each IP covers 5090 x 5090 meters and contains data from Sentinel-2 levels 1C and 2A, hand-crafted annotations of thick and thin clouds and cloud shadows, Sentinel-1 Synthetic Aperture Radar (SAR), digital elevation model, surface water occurrence, land cover classes, and cloud mask results from six cutting-edge cloud detection algorithms.
CloudSEN12 is designed to support both weakly and self-/semi-supervised learning strategies by including three distinct forms of hand-crafted labeling data: high-quality, scribble and no-annotation. For more details on how we created the dataset see our paper: CloudSEN12 - a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2.
ML-STAC Snippet

Sensor: Sentinel2 - MSI
ML-STAC Task: TensorToTensor, TensorSegmentation
Data raw repository: URL
Dataset discussion: URL
Review mean score: 5.0
Split\_strategy: random
Paper: URL
Data Providers
--------------
Curators
--------
Reviewers
---------
Labels
------
Dimensions
----------
### input
### target
Spectral Bands
--------------
| [
"### input",
"### target\n\n\n\nSpectral Bands\n--------------"
]
| [
"TAGS\n#language-English #clouds #sentinel-2 #image-segmentation #deep-learning #remote-sensing #region-us \n",
"### input",
"### target\n\n\n\nSpectral Bands\n--------------"
]
| [
33,
3,
9
]
| [
"passage: TAGS\n#language-English #clouds #sentinel-2 #image-segmentation #deep-learning #remote-sensing #region-us \n### input### target\n\n\n\nSpectral Bands\n--------------"
]
|
de717d9c4c1322223d1c800458987c87d53ed346 | # Dataset Card for "mistral-intent-data-1615"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pankajemplay/mistral-intent-data-1615 | [
"region:us"
]
| 2023-11-13T15:13:13+00:00 | {"dataset_info": {"features": [{"name": "User Query", "dtype": "string"}, {"name": "Intent", "dtype": "string"}, {"name": "id type", "dtype": "string"}, {"name": "id value", "dtype": "string"}, {"name": "id slot filled", "dtype": "bool"}, {"name": "Task", "dtype": "string"}, {"name": "task slot filled", "dtype": "bool"}, {"name": "Bot Response", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1203158, "num_examples": 1615}], "download_size": 257325, "dataset_size": 1203158}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T15:13:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mistral-intent-data-1615"
More Information needed | [
"# Dataset Card for \"mistral-intent-data-1615\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mistral-intent-data-1615\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mistral-intent-data-1615\"\n\nMore Information needed"
]
|
469bb47634a78952e21d53e39fd7442d8b3bb112 | # Dataset Card for "death_marriage_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sreejith8100/death_marriage_data | [
"region:us"
]
| 2023-11-13T15:14:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "death", "1": "marriage"}}}}], "splits": [{"name": "train", "num_bytes": 579589900.0, "num_examples": 448}, {"name": "test", "num_bytes": 13589304.0, "num_examples": 20}], "download_size": 593212683, "dataset_size": 593179204.0}} | 2023-11-13T15:17:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "death_marriage_data"
More Information needed | [
"# Dataset Card for \"death_marriage_data\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"death_marriage_data\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"death_marriage_data\"\n\nMore Information needed"
]
|
87eb6a0978bf82de10e6c2acd7f555e4f493a6d8 | # Dataset Card for "death_marriage_data2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sreejith8100/death_marriage_data2 | [
"region:us"
]
| 2023-11-13T15:18:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "death", "1": "marriage"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 579607385.0, "num_examples": 448}, {"name": "test", "num_bytes": 13590074.0, "num_examples": 20}], "download_size": 593216583, "dataset_size": 593197459.0}} | 2023-11-13T15:21:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "death_marriage_data2"
More Information needed | [
"# Dataset Card for \"death_marriage_data2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"death_marriage_data2\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"death_marriage_data2\"\n\nMore Information needed"
]
|
626e09a0c118abb94cde0b6d9e7f2b375043d58b |
# Dataset Card for Evaluation run of postbot/emailgen-pythia-410m-deduped
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/postbot/emailgen-pythia-410m-deduped
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [postbot/emailgen-pythia-410m-deduped](https://huggingface.co/postbot/emailgen-pythia-410m-deduped) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_postbot__emailgen-pythia-410m-deduped_public",
"harness_winogrande_5",
split="train")
```
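
Continuing from the example above, the loaded split behaves like any Hugging Face `datasets.Dataset`, so it can be inspected with the usual tools. This is a minimal sketch; the column names are not spelled out here and should be checked against the actual parquet files.

```python
# Inspect the loaded split: column names, size, and a pandas view
print(data.column_names)
print(len(data))
df = data.to_pandas()
print(df.head())
```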
## Latest results
These are the [latest results from run 2023-11-13T15:24:35.622872](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__emailgen-pythia-410m-deduped_public/blob/main/results_2023-11-13T15-24-35.622872.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2739821268942055,
"acc_stderr": 0.031358822799769724,
"acc_norm": 0.2757926465489037,
"acc_norm_stderr": 0.03219166127988676,
"mc1": 0.22276621787025705,
"mc1_stderr": 0.01456650696139673,
"mc2": 0.3819742528315203,
"mc2_stderr": 0.015246089965112817,
"em": 0.00020973154362416107,
"em_stderr": 0.00014829481977280738,
"f1": 0.009905620805369138,
"f1_stderr": 0.0005041998138971091
},
"harness|arc:challenge|25": {
"acc": 0.2593856655290102,
"acc_stderr": 0.012808273573927102,
"acc_norm": 0.2790102389078498,
"acc_norm_stderr": 0.013106784883601333
},
"harness|hellaswag|10": {
"acc": 0.34027086237801235,
"acc_stderr": 0.004728318577835236,
"acc_norm": 0.4004182433778132,
"acc_norm_stderr": 0.00488981748973969
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2518518518518518,
"acc_stderr": 0.037498507091740234,
"acc_norm": 0.2518518518518518,
"acc_norm_stderr": 0.037498507091740234
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.2894736842105263,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.2894736842105263,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.33584905660377357,
"acc_stderr": 0.029067220146644826,
"acc_norm": 0.33584905660377357,
"acc_norm_stderr": 0.029067220146644826
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.036539469694421,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.036539469694421
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001976,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001976
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2658959537572254,
"acc_stderr": 0.033687629322594316,
"acc_norm": 0.2658959537572254,
"acc_norm_stderr": 0.033687629322594316
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3137254901960784,
"acc_stderr": 0.04617034827006717,
"acc_norm": 0.3137254901960784,
"acc_norm_stderr": 0.04617034827006717
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.21,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.21,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2978723404255319,
"acc_stderr": 0.029896145682095455,
"acc_norm": 0.2978723404255319,
"acc_norm_stderr": 0.029896145682095455
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813344,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813344
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2206896551724138,
"acc_stderr": 0.034559302019248096,
"acc_norm": 0.2206896551724138,
"acc_norm_stderr": 0.034559302019248096
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25132275132275134,
"acc_stderr": 0.022340482339643895,
"acc_norm": 0.25132275132275134,
"acc_norm_stderr": 0.022340482339643895
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04216370213557836,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04216370213557836
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.22903225806451613,
"acc_stderr": 0.02390491431178265,
"acc_norm": 0.22903225806451613,
"acc_norm_stderr": 0.02390491431178265
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.27586206896551724,
"acc_stderr": 0.031447125816782426,
"acc_norm": 0.27586206896551724,
"acc_norm_stderr": 0.031447125816782426
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.26666666666666666,
"acc_stderr": 0.03453131801885415,
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.03453131801885415
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3181818181818182,
"acc_stderr": 0.03318477333845331,
"acc_norm": 0.3181818181818182,
"acc_norm_stderr": 0.03318477333845331
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.35751295336787564,
"acc_stderr": 0.034588160421810045,
"acc_norm": 0.35751295336787564,
"acc_norm_stderr": 0.034588160421810045
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.36153846153846153,
"acc_stderr": 0.024359581465396987,
"acc_norm": 0.36153846153846153,
"acc_norm_stderr": 0.024359581465396987
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.027195934804085622,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.027195934804085622
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3445378151260504,
"acc_stderr": 0.03086868260412163,
"acc_norm": 0.3445378151260504,
"acc_norm_stderr": 0.03086868260412163
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.344954128440367,
"acc_stderr": 0.02038060540506697,
"acc_norm": 0.344954128440367,
"acc_norm_stderr": 0.02038060540506697
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.033622774366080424,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.033622774366080424
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.03058759135160425,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.03058759135160425
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2109704641350211,
"acc_stderr": 0.02655837250266192,
"acc_norm": 0.2109704641350211,
"acc_norm_stderr": 0.02655837250266192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.12556053811659193,
"acc_stderr": 0.022238985469323774,
"acc_norm": 0.12556053811659193,
"acc_norm_stderr": 0.022238985469323774
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.366412213740458,
"acc_stderr": 0.04225875451969638,
"acc_norm": 0.366412213740458,
"acc_norm_stderr": 0.04225875451969638
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.23140495867768596,
"acc_stderr": 0.0384985609879409,
"acc_norm": 0.23140495867768596,
"acc_norm_stderr": 0.0384985609879409
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.23148148148148148,
"acc_stderr": 0.04077494709252628,
"acc_norm": 0.23148148148148148,
"acc_norm_stderr": 0.04077494709252628
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.25766871165644173,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.25766871165644173,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.15178571428571427,
"acc_stderr": 0.034057028381856945,
"acc_norm": 0.15178571428571427,
"acc_norm_stderr": 0.034057028381856945
},
"harness|hendrycksTest-management|5": {
"acc": 0.36893203883495146,
"acc_stderr": 0.047776151811567386,
"acc_norm": 0.36893203883495146,
"acc_norm_stderr": 0.047776151811567386
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.21367521367521367,
"acc_stderr": 0.026853450377009154,
"acc_norm": 0.21367521367521367,
"acc_norm_stderr": 0.026853450377009154
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.22988505747126436,
"acc_stderr": 0.015046301846691807,
"acc_norm": 0.22988505747126436,
"acc_norm_stderr": 0.015046301846691807
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.21098265895953758,
"acc_stderr": 0.021966309947043117,
"acc_norm": 0.21098265895953758,
"acc_norm_stderr": 0.021966309947043117
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.27150837988826815,
"acc_stderr": 0.014874252168095273,
"acc_norm": 0.27150837988826815,
"acc_norm_stderr": 0.014874252168095273
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.025261691219729498,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.025261691219729498
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2347266881028939,
"acc_stderr": 0.024071805887677045,
"acc_norm": 0.2347266881028939,
"acc_norm_stderr": 0.024071805887677045
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2345679012345679,
"acc_stderr": 0.023576881744005705,
"acc_norm": 0.2345679012345679,
"acc_norm_stderr": 0.023576881744005705
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.24113475177304963,
"acc_stderr": 0.02551873104953776,
"acc_norm": 0.24113475177304963,
"acc_norm_stderr": 0.02551873104953776
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.25097783572359844,
"acc_stderr": 0.01107373029918723,
"acc_norm": 0.25097783572359844,
"acc_norm_stderr": 0.01107373029918723
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4227941176470588,
"acc_stderr": 0.030008562845003476,
"acc_norm": 0.4227941176470588,
"acc_norm_stderr": 0.030008562845003476
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.24183006535947713,
"acc_stderr": 0.017322789207784326,
"acc_norm": 0.24183006535947713,
"acc_norm_stderr": 0.017322789207784326
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.19090909090909092,
"acc_stderr": 0.03764425585984926,
"acc_norm": 0.19090909090909092,
"acc_norm_stderr": 0.03764425585984926
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.4,
"acc_stderr": 0.031362502409358936,
"acc_norm": 0.4,
"acc_norm_stderr": 0.031362502409358936
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.2537313432835821,
"acc_stderr": 0.030769444967296028,
"acc_norm": 0.2537313432835821,
"acc_norm_stderr": 0.030769444967296028
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-virology|5": {
"acc": 0.25301204819277107,
"acc_stderr": 0.033844291552331346,
"acc_norm": 0.25301204819277107,
"acc_norm_stderr": 0.033844291552331346
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.22807017543859648,
"acc_stderr": 0.032180937956023566,
"acc_norm": 0.22807017543859648,
"acc_norm_stderr": 0.032180937956023566
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22276621787025705,
"mc1_stderr": 0.01456650696139673,
"mc2": 0.3819742528315203,
"mc2_stderr": 0.015246089965112817
},
"harness|winogrande|5": {
"acc": 0.5209155485398579,
"acc_stderr": 0.014040185494212947
},
"harness|drop|3": {
"em": 0.00020973154362416107,
"em_stderr": 0.00014829481977280738,
"f1": 0.009905620805369138,
"f1_stderr": 0.0005041998138971091
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
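
The per-task averages reported on the leaderboard can be recomputed from this JSON. The sketch below assumes the block above has been saved locally as `results.json` (a labeled assumption); it simply averages `acc_norm` over the MMLU (hendrycksTest) subtasks.

```python
import json

# Assumes the JSON block above has been saved locally as results.json
with open("results.json") as f:
    results = json.load(f)

# Average normalized accuracy over all MMLU (hendrycksTest) subtasks
mmlu = [v["acc_norm"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print(f"MMLU subtasks: {len(mmlu)}, mean acc_norm: {sum(mmlu) / len(mmlu):.4f}")
```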
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_postbot__emailgen-pythia-410m-deduped | [
"region:us"
]
| 2023-11-13T15:26:49+00:00 | {"pretty_name": "Evaluation run of postbot/emailgen-pythia-410m-deduped", "dataset_summary": "Dataset automatically created during the evaluation run of model [postbot/emailgen-pythia-410m-deduped](https://huggingface.co/postbot/emailgen-pythia-410m-deduped) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_postbot__emailgen-pythia-410m-deduped_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T15:24:35.622872](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__emailgen-pythia-410m-deduped_public/blob/main/results_2023-11-13T15-24-35.622872.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2739821268942055,\n \"acc_stderr\": 0.031358822799769724,\n \"acc_norm\": 0.2757926465489037,\n \"acc_norm_stderr\": 0.03219166127988676,\n \"mc1\": 0.22276621787025705,\n \"mc1_stderr\": 0.01456650696139673,\n \"mc2\": 0.3819742528315203,\n \"mc2_stderr\": 0.015246089965112817,\n \"em\": 0.00020973154362416107,\n \"em_stderr\": 0.00014829481977280738,\n \"f1\": 0.009905620805369138,\n \"f1_stderr\": 0.0005041998138971091\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.2593856655290102,\n \"acc_stderr\": 0.012808273573927102,\n \"acc_norm\": 0.2790102389078498,\n \"acc_norm_stderr\": 0.013106784883601333\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.34027086237801235,\n \"acc_stderr\": 0.004728318577835236,\n \"acc_norm\": 0.4004182433778132,\n \"acc_norm_stderr\": 0.00488981748973969\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2518518518518518,\n \"acc_stderr\": 0.037498507091740234,\n \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.037498507091740234\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.2894736842105263,\n \"acc_stderr\": 0.03690677986137283,\n \"acc_norm\": 0.2894736842105263,\n \"acc_norm_stderr\": 0.03690677986137283\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.33584905660377357,\n \"acc_stderr\": 0.029067220146644826,\n \"acc_norm\": 0.33584905660377357,\n \"acc_norm_stderr\": 0.029067220146644826\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n 
\"acc_stderr\": 0.036539469694421,\n \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.036539469694421\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001976,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001976\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2658959537572254,\n \"acc_stderr\": 0.033687629322594316,\n \"acc_norm\": 0.2658959537572254,\n \"acc_norm_stderr\": 0.033687629322594316\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3137254901960784,\n \"acc_stderr\": 0.04617034827006717,\n \"acc_norm\": 0.3137254901960784,\n \"acc_norm_stderr\": 0.04617034827006717\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2978723404255319,\n \"acc_stderr\": 0.029896145682095455,\n \"acc_norm\": 0.2978723404255319,\n \"acc_norm_stderr\": 0.029896145682095455\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.039994238792813344,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.039994238792813344\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2206896551724138,\n \"acc_stderr\": 0.034559302019248096,\n \"acc_norm\": 0.2206896551724138,\n \"acc_norm_stderr\": 0.034559302019248096\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.25132275132275134,\n \"acc_stderr\": 0.022340482339643895,\n \"acc_norm\": 0.25132275132275134,\n \"acc_norm_stderr\": 0.022340482339643895\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04216370213557836,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04216370213557836\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.22903225806451613,\n \"acc_stderr\": 0.02390491431178265,\n \"acc_norm\": 0.22903225806451613,\n \"acc_norm_stderr\": 0.02390491431178265\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.27586206896551724,\n \"acc_stderr\": 0.031447125816782426,\n \"acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.031447125816782426\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.03453131801885415,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.03453131801885415\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.3181818181818182,\n \"acc_stderr\": 0.03318477333845331,\n \"acc_norm\": 0.3181818181818182,\n \"acc_norm_stderr\": 0.03318477333845331\n },\n 
\"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.35751295336787564,\n \"acc_stderr\": 0.034588160421810045,\n \"acc_norm\": 0.35751295336787564,\n \"acc_norm_stderr\": 0.034588160421810045\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.36153846153846153,\n \"acc_stderr\": 0.024359581465396987,\n \"acc_norm\": 0.36153846153846153,\n \"acc_norm_stderr\": 0.024359581465396987\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085622,\n \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085622\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.3445378151260504,\n \"acc_stderr\": 0.03086868260412163,\n \"acc_norm\": 0.3445378151260504,\n \"acc_norm_stderr\": 0.03086868260412163\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.344954128440367,\n \"acc_stderr\": 0.02038060540506697,\n \"acc_norm\": 0.344954128440367,\n \"acc_norm_stderr\": 0.02038060540506697\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4166666666666667,\n \"acc_stderr\": 0.033622774366080424,\n \"acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.033622774366080424\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.03058759135160425,\n \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.03058759135160425\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2109704641350211,\n \"acc_stderr\": 0.02655837250266192,\n \"acc_norm\": 0.2109704641350211,\n \"acc_norm_stderr\": 0.02655837250266192\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.12556053811659193,\n \"acc_stderr\": 0.022238985469323774,\n \"acc_norm\": 0.12556053811659193,\n \"acc_norm_stderr\": 0.022238985469323774\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.366412213740458,\n \"acc_stderr\": 0.04225875451969638,\n \"acc_norm\": 0.366412213740458,\n \"acc_norm_stderr\": 0.04225875451969638\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.23140495867768596,\n \"acc_stderr\": 0.0384985609879409,\n \"acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.0384985609879409\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.04077494709252628,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.04077494709252628\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.25766871165644173,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.25766871165644173,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.15178571428571427,\n \"acc_stderr\": 0.034057028381856945,\n \"acc_norm\": 0.15178571428571427,\n \"acc_norm_stderr\": 0.034057028381856945\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.36893203883495146,\n \"acc_stderr\": 0.047776151811567386,\n \"acc_norm\": 0.36893203883495146,\n \"acc_norm_stderr\": 0.047776151811567386\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.21367521367521367,\n \"acc_stderr\": 0.026853450377009154,\n \"acc_norm\": 0.21367521367521367,\n \"acc_norm_stderr\": 0.026853450377009154\n 
},\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.22988505747126436,\n \"acc_stderr\": 0.015046301846691807,\n \"acc_norm\": 0.22988505747126436,\n \"acc_norm_stderr\": 0.015046301846691807\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.21098265895953758,\n \"acc_stderr\": 0.021966309947043117,\n \"acc_norm\": 0.21098265895953758,\n \"acc_norm_stderr\": 0.021966309947043117\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27150837988826815,\n \"acc_stderr\": 0.014874252168095273,\n \"acc_norm\": 0.27150837988826815,\n \"acc_norm_stderr\": 0.014874252168095273\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.025261691219729498,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.025261691219729498\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2347266881028939,\n \"acc_stderr\": 0.024071805887677045,\n \"acc_norm\": 0.2347266881028939,\n \"acc_norm_stderr\": 0.024071805887677045\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2345679012345679,\n \"acc_stderr\": 0.023576881744005705,\n \"acc_norm\": 0.2345679012345679,\n \"acc_norm_stderr\": 0.023576881744005705\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.24113475177304963,\n \"acc_stderr\": 0.02551873104953776,\n \"acc_norm\": 0.24113475177304963,\n \"acc_norm_stderr\": 0.02551873104953776\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.25097783572359844,\n \"acc_stderr\": 0.01107373029918723,\n \"acc_norm\": 0.25097783572359844,\n \"acc_norm_stderr\": 0.01107373029918723\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4227941176470588,\n \"acc_stderr\": 0.030008562845003476,\n \"acc_norm\": 0.4227941176470588,\n \"acc_norm_stderr\": 0.030008562845003476\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.24183006535947713,\n \"acc_stderr\": 0.017322789207784326,\n \"acc_norm\": 0.24183006535947713,\n \"acc_norm_stderr\": 0.017322789207784326\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.19090909090909092,\n \"acc_stderr\": 0.03764425585984926,\n \"acc_norm\": 0.19090909090909092,\n \"acc_norm_stderr\": 0.03764425585984926\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.031362502409358936,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.031362502409358936\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.2537313432835821,\n \"acc_stderr\": 0.030769444967296028,\n \"acc_norm\": 0.2537313432835821,\n \"acc_norm_stderr\": 0.030769444967296028\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.25301204819277107,\n \"acc_stderr\": 0.033844291552331346,\n \"acc_norm\": 0.25301204819277107,\n \"acc_norm_stderr\": 0.033844291552331346\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.22807017543859648,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.22807017543859648,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22276621787025705,\n \"mc1_stderr\": 0.01456650696139673,\n \"mc2\": 0.3819742528315203,\n 
\"mc2_stderr\": 0.015246089965112817\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5209155485398579,\n \"acc_stderr\": 0.014040185494212947\n },\n \"harness|drop|3\": {\n \"em\": 0.00020973154362416107,\n \"em_stderr\": 0.00014829481977280738,\n \"f1\": 0.009905620805369138,\n \"f1_stderr\": 0.0005041998138971091\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/postbot/emailgen-pythia-410m-deduped", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|drop|3_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-24-35.622872.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-24-35.622872.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-24-35.622872.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["**/details_harness|winogrande|5_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T15-24-35.622872.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_24_35.622872", "path": ["results_2023-11-13T15-24-35.622872.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T15-24-35.622872.parquet"]}]}]} | 2023-11-13T15:27:36+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of postbot/emailgen-pythia-410m-deduped
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model postbot/emailgen-pythia-410m-deduped on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-13T15:24:35.622872 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of postbot/emailgen-pythia-410m-deduped",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model postbot/emailgen-pythia-410m-deduped on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:24:35.622872(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of postbot/emailgen-pythia-410m-deduped",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model postbot/emailgen-pythia-410m-deduped on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:24:35.622872(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of postbot/emailgen-pythia-410m-deduped## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model postbot/emailgen-pythia-410m-deduped on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T15:24:35.622872(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
a51bc6fefae97a9cd446e05f6d638b26aec627f2 |
# Dataset Card for Evaluation run of bofenghuang/vigostral-7b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bofenghuang/vigostral-7b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [bofenghuang/vigostral-7b-chat](https://huggingface.co/bofenghuang/vigostral-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat_public",
"harness_winogrande_5",
split="train")
```
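
If you only need the aggregated scores rather than the per-sample details, you can load the "results" configuration instead. This is a minimal sketch, assuming the standard "results" configuration and "latest" split that these leaderboard detail repositories expose:

```python
from datasets import load_dataset

# Aggregated metrics of the run; the "latest" split always points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat_public",
    "results",
    split="latest",
)

# Each row corresponds to one run; print the first one to see the aggregated metrics.
print(results[0])
```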
## Latest results
These are the [latest results from run 2023-11-13T15:29:27.357304](https://huggingface.co/datasets/open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat_public/blob/main/results_2023-11-13T15-29-27.357304.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6295084211098857,
"acc_stderr": 0.03240910499451327,
"acc_norm": 0.6386954674519838,
"acc_norm_stderr": 0.03311457250909517,
"mc1": 0.32802937576499386,
"mc1_stderr": 0.01643563293281503,
"mc2": 0.49239586566553695,
"mc2_stderr": 0.014798235305508963,
"em": 0.06596057046979865,
"em_stderr": 0.0025419350983795505,
"f1": 0.13260171979865745,
"f1_stderr": 0.0027787818602447705
},
"harness|arc:challenge|25": {
"acc": 0.5921501706484642,
"acc_stderr": 0.0143610972884497,
"acc_norm": 0.6262798634812287,
"acc_norm_stderr": 0.014137708601759086
},
"harness|hellaswag|10": {
"acc": 0.6408086038637721,
"acc_stderr": 0.004787829168255654,
"acc_norm": 0.8433578968333001,
"acc_norm_stderr": 0.0036272018740533918
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.037385206761196686,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.037385206761196686
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7013888888888888,
"acc_stderr": 0.03827052357950756,
"acc_norm": 0.7013888888888888,
"acc_norm_stderr": 0.03827052357950756
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6069364161849711,
"acc_stderr": 0.03724249595817731,
"acc_norm": 0.6069364161849711,
"acc_norm_stderr": 0.03724249595817731
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.032469569197899575,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.032469569197899575
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.025197101074246487,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.025197101074246487
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.0442626668137991,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.0442626668137991
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782648,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782648
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.03374402644139403,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.03374402644139403
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.02833560973246336,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.02833560973246336
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8704663212435233,
"acc_stderr": 0.024233532297758733,
"acc_norm": 0.8704663212435233,
"acc_norm_stderr": 0.024233532297758733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6512820512820513,
"acc_stderr": 0.02416278028401772,
"acc_norm": 0.6512820512820513,
"acc_norm_stderr": 0.02416278028401772
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3851851851851852,
"acc_stderr": 0.029670906124630886,
"acc_norm": 0.3851851851851852,
"acc_norm_stderr": 0.029670906124630886
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.030489911417673227,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.030489911417673227
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8220183486238533,
"acc_stderr": 0.016399436366612927,
"acc_norm": 0.8220183486238533,
"acc_norm_stderr": 0.016399436366612927
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.47685185185185186,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.47685185185185186,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.027325470966716312,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.027325470966716312
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.672645739910314,
"acc_stderr": 0.03149384670994131,
"acc_norm": 0.672645739910314,
"acc_norm_stderr": 0.03149384670994131
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159464,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159464
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742179,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742179
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.04058042015646034,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.04058042015646034
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597528,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597528
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8058748403575989,
"acc_stderr": 0.014143970276657569,
"acc_norm": 0.8058748403575989,
"acc_norm_stderr": 0.014143970276657569
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6965317919075145,
"acc_stderr": 0.02475241196091721,
"acc_norm": 0.6965317919075145,
"acc_norm_stderr": 0.02475241196091721
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3039106145251397,
"acc_stderr": 0.015382845587584518,
"acc_norm": 0.3039106145251397,
"acc_norm_stderr": 0.015382845587584518
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.024630048979824775,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.024630048979824775
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.024659685185967284,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.024659685185967284
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4452411994784876,
"acc_stderr": 0.012693421303973294,
"acc_norm": 0.4452411994784876,
"acc_norm_stderr": 0.012693421303973294
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6654411764705882,
"acc_stderr": 0.028661996202335303,
"acc_norm": 0.6654411764705882,
"acc_norm_stderr": 0.028661996202335303
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6683006535947712,
"acc_stderr": 0.019047485239360378,
"acc_norm": 0.6683006535947712,
"acc_norm_stderr": 0.019047485239360378
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.03379976689896308,
"acc_norm": 0.87,
"acc_norm_stderr": 0.03379976689896308
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7953216374269005,
"acc_stderr": 0.030944459778533207,
"acc_norm": 0.7953216374269005,
"acc_norm_stderr": 0.030944459778533207
},
"harness|truthfulqa:mc|0": {
"mc1": 0.32802937576499386,
"mc1_stderr": 0.01643563293281503,
"mc2": 0.49239586566553695,
"mc2_stderr": 0.014798235305508963
},
"harness|winogrande|5": {
"acc": 0.7861089187056038,
"acc_stderr": 0.01152446695409025
},
"harness|drop|3": {
"em": 0.06596057046979865,
"em_stderr": 0.0025419350983795505,
"f1": 0.13260171979865745,
"f1_stderr": 0.0027787818602447705
},
"harness|gsm8k|5": {
"acc": 0.16755117513267628,
"acc_stderr": 0.01028714369371122
}
}
```
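
To inspect the per-sample predictions behind any single score above (for example the 5-shot GSM8K accuracy), you can load that task's own configuration. This is a sketch assuming the `harness_gsm8k_5` configuration listed in this repository's metadata; the exact per-sample columns can vary between harness versions, so it only prints the schema and the first row:

```python
from datasets import load_dataset

# Per-sample details for the 5-shot GSM8K evaluation of this model.
gsm8k_details = load_dataset(
    "open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat_public",
    "harness_gsm8k_5",
    split="latest",
)

# Check which fields are stored per sample before relying on any of them.
print(gsm8k_details.column_names)
print(gsm8k_details[0])
```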
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat | [
"region:us"
]
| 2023-11-13T15:32:26+00:00 | {"pretty_name": "Evaluation run of bofenghuang/vigostral-7b-chat", "dataset_summary": "Dataset automatically created during the evaluation run of model [bofenghuang/vigostral-7b-chat](https://huggingface.co/bofenghuang/vigostral-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T15:29:27.357304](https://huggingface.co/datasets/open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat_public/blob/main/results_2023-11-13T15-29-27.357304.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6295084211098857,\n \"acc_stderr\": 0.03240910499451327,\n \"acc_norm\": 0.6386954674519838,\n \"acc_norm_stderr\": 0.03311457250909517,\n \"mc1\": 0.32802937576499386,\n \"mc1_stderr\": 0.01643563293281503,\n \"mc2\": 0.49239586566553695,\n \"mc2_stderr\": 0.014798235305508963,\n \"em\": 0.06596057046979865,\n \"em_stderr\": 0.0025419350983795505,\n \"f1\": 0.13260171979865745,\n \"f1_stderr\": 0.0027787818602447705\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5921501706484642,\n \"acc_stderr\": 0.0143610972884497,\n \"acc_norm\": 0.6262798634812287,\n \"acc_norm_stderr\": 0.014137708601759086\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6408086038637721,\n \"acc_stderr\": 0.004787829168255654,\n \"acc_norm\": 0.8433578968333001,\n \"acc_norm_stderr\": 0.0036272018740533918\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.037385206761196686,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.037385206761196686\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7013888888888888,\n \"acc_stderr\": 0.03827052357950756,\n \"acc_norm\": 
0.7013888888888888,\n \"acc_norm_stderr\": 0.03827052357950756\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.03724249595817731,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.03724249595817731\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.032469569197899575,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.032469569197899575\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.025197101074246487,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.025197101074246487\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.023904914311782648,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.023904914311782648\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.03374402644139403,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.03374402644139403\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.803030303030303,\n \"acc_stderr\": 0.02833560973246336,\n \"acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.02833560973246336\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 
0.024233532297758733,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758733\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6512820512820513,\n \"acc_stderr\": 0.02416278028401772,\n \"acc_norm\": 0.6512820512820513,\n \"acc_norm_stderr\": 0.02416278028401772\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3851851851851852,\n \"acc_stderr\": 0.029670906124630886,\n \"acc_norm\": 0.3851851851851852,\n \"acc_norm_stderr\": 0.029670906124630886\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.030489911417673227,\n \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.030489911417673227\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8220183486238533,\n \"acc_stderr\": 0.016399436366612927,\n \"acc_norm\": 0.8220183486238533,\n \"acc_norm_stderr\": 0.016399436366612927\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.47685185185185186,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.47685185185185186,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.027325470966716312,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.027325470966716312\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159263,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159263\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.672645739910314,\n \"acc_stderr\": 0.03149384670994131,\n \"acc_norm\": 0.672645739910314,\n \"acc_norm_stderr\": 0.03149384670994131\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159464,\n \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159464\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742179,\n \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742179\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.04058042015646034,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.04058042015646034\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597528,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597528\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n 
\"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8058748403575989,\n \"acc_stderr\": 0.014143970276657569,\n \"acc_norm\": 0.8058748403575989,\n \"acc_norm_stderr\": 0.014143970276657569\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6965317919075145,\n \"acc_stderr\": 0.02475241196091721,\n \"acc_norm\": 0.6965317919075145,\n \"acc_norm_stderr\": 0.02475241196091721\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3039106145251397,\n \"acc_stderr\": 0.015382845587584518,\n \"acc_norm\": 0.3039106145251397,\n \"acc_norm_stderr\": 0.015382845587584518\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7549019607843137,\n \"acc_stderr\": 0.024630048979824775,\n \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.024630048979824775\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.024659685185967284,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.024659685185967284\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4452411994784876,\n \"acc_stderr\": 0.012693421303973294,\n \"acc_norm\": 0.4452411994784876,\n \"acc_norm_stderr\": 0.012693421303973294\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6654411764705882,\n \"acc_stderr\": 0.028661996202335303,\n \"acc_norm\": 0.6654411764705882,\n \"acc_norm_stderr\": 0.028661996202335303\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6683006535947712,\n \"acc_stderr\": 0.019047485239360378,\n \"acc_norm\": 0.6683006535947712,\n \"acc_norm_stderr\": 0.019047485239360378\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.03379976689896308,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.03379976689896308\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7953216374269005,\n \"acc_stderr\": 0.030944459778533207,\n \"acc_norm\": 0.7953216374269005,\n \"acc_norm_stderr\": 0.030944459778533207\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.32802937576499386,\n \"mc1_stderr\": 0.01643563293281503,\n \"mc2\": 0.49239586566553695,\n \"mc2_stderr\": 0.014798235305508963\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7861089187056038,\n \"acc_stderr\": 
0.01152446695409025\n },\n \"harness|drop|3\": {\n \"em\": 0.06596057046979865,\n \"em_stderr\": 0.0025419350983795505,\n \"f1\": 0.13260171979865745,\n \"f1_stderr\": 0.0027787818602447705\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16755117513267628,\n \"acc_stderr\": 0.01028714369371122\n }\n}\n```", "repo_url": "https://huggingface.co/bofenghuang/vigostral-7b-chat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|drop|3_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-29-27.357304.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-29-27.357304.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-29-27.357304.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-29-27.357304.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-29-27.357304.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["**/details_harness|winogrande|5_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T15-29-27.357304.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_29_27.357304", "path": ["results_2023-11-13T15-29-27.357304.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T15-29-27.357304.parquet"]}]}]} | 2023-11-13T15:33:15+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of bofenghuang/vigostral-7b-chat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model bofenghuang/vigostral-7b-chat on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
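A minimal sketch of such a call (the repository id below is an assumption, inferred from the `details_<org>__<model>_public` naming pattern used by the other cards in this document; the `harness_winogrande_5` configuration is listed in this card's metadata):

```python
from datasets import load_dataset

# Repository id assumed from the naming pattern used elsewhere in this document.
data = load_dataset("open-llm-leaderboard/details_bofenghuang__vigostral-7b-chat_public",
	"harness_winogrande_5",
	split="train")
```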
## Latest results
These are the latest results from run 2023-11-13T15:29:27.357304 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of bofenghuang/vigostral-7b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bofenghuang/vigostral-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:29:27.357304(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bofenghuang/vigostral-7b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bofenghuang/vigostral-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:29:27.357304(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
22,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bofenghuang/vigostral-7b-chat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bofenghuang/vigostral-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T15:29:27.357304(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
475f1b3a610ab399e44475b9a0b66f4c958425a8 |
# Dataset Card for Evaluation run of Norquinal/Mistral-7B-claude-instruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Norquinal/Mistral-7B-claude-instruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Norquinal/Mistral-7B-claude-instruct](https://huggingface.co/Norquinal/Mistral-7B-claude-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Norquinal__Mistral-7B-claude-instruct_public",
"harness_winogrande_5",
split="train")
```
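Once loaded, the split is a regular `datasets.Dataset`; for example, a quick way to look at its schema and contents:

```python
# Inspect the available columns and preview a few rows with pandas.
print(data.column_names)
df = data.to_pandas()
print(df.head())
```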
## Latest results
These are the [latest results from run 2023-11-13T15:34:36.635642](https://huggingface.co/datasets/open-llm-leaderboard/details_Norquinal__Mistral-7B-claude-instruct_public/blob/main/results_2023-11-13T15-34-36.635642.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6328012974181245,
"acc_stderr": 0.032347704149397305,
"acc_norm": 0.6418533753559277,
"acc_norm_stderr": 0.03304428598840875,
"mc1": 0.32313341493268055,
"mc1_stderr": 0.0163718362864546,
"mc2": 0.4747061071538381,
"mc2_stderr": 0.014816247527686706,
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298484,
"f1": 0.06348154362416109,
"f1_stderr": 0.0013886897198441997
},
"harness|arc:challenge|25": {
"acc": 0.6023890784982935,
"acc_stderr": 0.01430175222327954,
"acc_norm": 0.6322525597269625,
"acc_norm_stderr": 0.014090995618168484
},
"harness|hellaswag|10": {
"acc": 0.6502688707428799,
"acc_stderr": 0.00475910343238076,
"acc_norm": 0.8499302927703645,
"acc_norm_stderr": 0.003564098420387769
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926604,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926604
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.035676037996391706,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.035676037996391706
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5872340425531914,
"acc_stderr": 0.03218471141400351,
"acc_norm": 0.5872340425531914,
"acc_norm_stderr": 0.03218471141400351
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.025279850397404904,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.025279850397404904
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.541871921182266,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.541871921182266,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.02463978909770944,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.02463978909770944
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6333333333333333,
"acc_stderr": 0.02443301646605246,
"acc_norm": 0.6333333333333333,
"acc_norm_stderr": 0.02443301646605246
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37777777777777777,
"acc_stderr": 0.029560707392465718,
"acc_norm": 0.37777777777777777,
"acc_norm_stderr": 0.029560707392465718
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.030778057422931673,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.030778057422931673
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8110091743119267,
"acc_stderr": 0.016785481159203627,
"acc_norm": 0.8110091743119267,
"acc_norm_stderr": 0.016785481159203627
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.033922384053216174,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.033922384053216174
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7745098039215687,
"acc_stderr": 0.02933116229425174,
"acc_norm": 0.7745098039215687,
"acc_norm_stderr": 0.02933116229425174
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159256,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159256
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.03160295143776679,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.03160295143776679
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094633,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094633
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.803680981595092,
"acc_stderr": 0.031207970394709218,
"acc_norm": 0.803680981595092,
"acc_norm_stderr": 0.031207970394709218
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092382,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092382
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8122605363984674,
"acc_stderr": 0.013964393769899133,
"acc_norm": 0.8122605363984674,
"acc_norm_stderr": 0.013964393769899133
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.708092485549133,
"acc_stderr": 0.024476994076247337,
"acc_norm": 0.708092485549133,
"acc_norm_stderr": 0.024476994076247337
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3463687150837989,
"acc_stderr": 0.015913546784020117,
"acc_norm": 0.3463687150837989,
"acc_norm_stderr": 0.015913546784020117
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.02463004897982478,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.02463004897982478
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.026160584450140453,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.026160584450140453
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.024922001168886335,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.024922001168886335
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.029790719243829714,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.029790719243829714
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4426336375488918,
"acc_stderr": 0.012685906538206242,
"acc_norm": 0.4426336375488918,
"acc_norm_stderr": 0.012685906538206242
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.028418208619406752,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.028418208619406752
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6683006535947712,
"acc_stderr": 0.019047485239360375,
"acc_norm": 0.6683006535947712,
"acc_norm_stderr": 0.019047485239360375
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291286,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291286
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.036845294917747115,
"acc_norm": 0.84,
"acc_norm_stderr": 0.036845294917747115
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.32313341493268055,
"mc1_stderr": 0.0163718362864546,
"mc2": 0.4747061071538381,
"mc2_stderr": 0.014816247527686706
},
"harness|winogrande|5": {
"acc": 0.7813733228097869,
"acc_stderr": 0.011616198215773239
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298484,
"f1": 0.06348154362416109,
"f1_stderr": 0.0013886897198441997
},
"harness|gsm8k|5": {
"acc": 0.17968157695223655,
"acc_stderr": 0.010575119964242251
}
}
```
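As a small worked example of how the per-subject numbers above can be summarized, one might average the `acc_norm` values of all `hendrycksTest` entries; the sketch below uses a two-subject excerpt of the JSON shown above (it is not the official leaderboard aggregation, just an illustration):

```python
# Excerpt of the parsed results JSON above (two subjects only, for illustration).
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc_norm": 0.26},
    "harness|hendrycksTest-anatomy|5": {"acc_norm": 0.6296296296296297},
}

# Average acc_norm across all hendrycksTest subjects present in the dictionary.
mmlu_scores = [
    v["acc_norm"]
    for key, v in results.items()
    if key.startswith("harness|hendrycksTest-")
]
average = sum(mmlu_scores) / len(mmlu_scores)
print(f"Average acc_norm over {len(mmlu_scores)} subjects: {average:.4f}")
```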
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Norquinal__Mistral-7B-claude-instruct | [
"region:us"
]
| 2023-11-13T15:37:37+00:00 | {"pretty_name": "Evaluation run of Norquinal/Mistral-7B-claude-instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [Norquinal/Mistral-7B-claude-instruct](https://huggingface.co/Norquinal/Mistral-7B-claude-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Norquinal__Mistral-7B-claude-instruct_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T15:34:36.635642](https://huggingface.co/datasets/open-llm-leaderboard/details_Norquinal__Mistral-7B-claude-instruct_public/blob/main/results_2023-11-13T15-34-36.635642.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6328012974181245,\n \"acc_stderr\": 0.032347704149397305,\n \"acc_norm\": 0.6418533753559277,\n \"acc_norm_stderr\": 0.03304428598840875,\n \"mc1\": 0.32313341493268055,\n \"mc1_stderr\": 0.0163718362864546,\n \"mc2\": 0.4747061071538381,\n \"mc2_stderr\": 0.014816247527686706,\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298484,\n \"f1\": 0.06348154362416109,\n \"f1_stderr\": 0.0013886897198441997\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6023890784982935,\n \"acc_stderr\": 0.01430175222327954,\n \"acc_norm\": 0.6322525597269625,\n \"acc_norm_stderr\": 0.014090995618168484\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6502688707428799,\n \"acc_stderr\": 0.00475910343238076,\n \"acc_norm\": 0.8499302927703645,\n \"acc_norm_stderr\": 0.003564098420387769\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926604,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926604\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n \"acc_stderr\": 0.037738099906869334,\n 
\"acc_norm\": 0.7152777777777778,\n \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5872340425531914,\n \"acc_stderr\": 0.03218471141400351,\n \"acc_norm\": 0.5872340425531914,\n \"acc_norm_stderr\": 0.03218471141400351\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482758,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482758\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404904,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404904\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.541871921182266,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n 
\"acc_stderr\": 0.02463978909770944,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.02463978909770944\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6333333333333333,\n \"acc_stderr\": 0.02443301646605246,\n \"acc_norm\": 0.6333333333333333,\n \"acc_norm_stderr\": 0.02443301646605246\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37777777777777777,\n \"acc_stderr\": 0.029560707392465718,\n \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.029560707392465718\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8110091743119267,\n \"acc_stderr\": 0.016785481159203627,\n \"acc_norm\": 0.8110091743119267,\n \"acc_norm_stderr\": 0.016785481159203627\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.033922384053216174,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.033922384053216174\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.02933116229425174,\n \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.02933116229425174\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159256,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159256\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.04236511258094633,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.04236511258094633\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709218,\n \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709218\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092382,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092382\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n 
\"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n \"acc_stderr\": 0.013964393769899133,\n \"acc_norm\": 0.8122605363984674,\n \"acc_norm_stderr\": 0.013964393769899133\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.708092485549133,\n \"acc_stderr\": 0.024476994076247337,\n \"acc_norm\": 0.708092485549133,\n \"acc_norm_stderr\": 0.024476994076247337\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3463687150837989,\n \"acc_stderr\": 0.015913546784020117,\n \"acc_norm\": 0.3463687150837989,\n \"acc_norm_stderr\": 0.015913546784020117\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7549019607843137,\n \"acc_stderr\": 0.02463004897982478,\n \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.02463004897982478\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n \"acc_stderr\": 0.026160584450140453,\n \"acc_norm\": 0.6945337620578779,\n \"acc_norm_stderr\": 0.026160584450140453\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.024922001168886335,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.024922001168886335\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.475177304964539,\n \"acc_stderr\": 0.029790719243829714,\n \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.029790719243829714\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4426336375488918,\n \"acc_stderr\": 0.012685906538206242,\n \"acc_norm\": 0.4426336375488918,\n \"acc_norm_stderr\": 0.012685906538206242\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406752,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406752\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6683006535947712,\n \"acc_stderr\": 0.019047485239360375,\n \"acc_norm\": 0.6683006535947712,\n \"acc_norm_stderr\": 0.019047485239360375\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291286,\n \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291286\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.036845294917747115,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.036845294917747115\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.32313341493268055,\n \"mc1_stderr\": 0.0163718362864546,\n \"mc2\": 0.4747061071538381,\n \"mc2_stderr\": 0.014816247527686706\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7813733228097869,\n 
\"acc_stderr\": 0.011616198215773239\n },\n \"harness|drop|3\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298484,\n \"f1\": 0.06348154362416109,\n \"f1_stderr\": 0.0013886897198441997\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17968157695223655,\n \"acc_stderr\": 0.010575119964242251\n }\n}\n```", "repo_url": "https://huggingface.co/Norquinal/Mistral-7B-claude-instruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|drop|3_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-34-36.635642.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-34-36.635642.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-34-36.635642.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-34-36.635642.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-34-36.635642.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["**/details_harness|winogrande|5_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T15-34-36.635642.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_34_36.635642", "path": ["results_2023-11-13T15-34-36.635642.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T15-34-36.635642.parquet"]}]}]} | 2023-11-13T15:38:24+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of Norquinal/Mistral-7B-claude-instruct
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Norquinal/Mistral-7B-claude-instruct on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
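For illustration, a minimal sketch of such a call is given below; the repository name is an assumption inferred from the usual open-llm-leaderboard "details_<org>__<model>_public" naming pattern rather than taken from this card, while the "harness_winogrande_5" configuration is one of those declared in the repository metadata:

```python
from datasets import load_dataset

# Repository name assumed from the standard
# "details_<org>__<model>_public" naming pattern.
data = load_dataset(
    "open-llm-leaderboard/details_Norquinal__Mistral-7B-claude-instruct_public",
    "harness_winogrande_5",
    split="train",
)
print(data)
```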
## Latest results
These are the latest results from run 2023-11-13T15:34:36.635642 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Norquinal/Mistral-7B-claude-instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Norquinal/Mistral-7B-claude-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:34:36.635642(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Norquinal/Mistral-7B-claude-instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Norquinal/Mistral-7B-claude-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:34:36.635642(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
23,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Norquinal/Mistral-7B-claude-instruct## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Norquinal/Mistral-7B-claude-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T15:34:36.635642(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
fef54eb4238f61c149b66a21634f4ddbdf7a75c5 |
# Dataset Card for "Continuous Scale Meaning Dataset" (CSMD)
CSMD was created for [MeaningBERT: Assessing Meaning Preservation Between Sentences](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full).
It contains 1,355 English text simplification meaning preservation annotations. Meaning preservation measures how well the meaning of the output text corresponds to the meaning of the source ([Saggion, 2017](https://link.springer.com/book/10.1007/978-3-031-02166-4)).
The annotations were taken from the following four datasets:
- [ASSET](https://aclanthology.org/2020.acl-main.424/)
- [QuestEVal](https://arxiv.org/abs/2104.07560),
- [SimpDa_2022](https://aclanthology.org/2023.acl-long.905.pdf) and,
- [Simplicity-DA](https://direct.mit.edu/coli/article/47/4/861/106930/The-Un-Suitability-of-Automatic-Evaluation-Metrics).
It contains a data augmentation subset of 1,355 identical sentence triplets and 1,355 unrelated sentence triplets (See the "Sanity Checks" section (3.3.) in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
It also contains two holdout subsets of 359 identical sentence triplets and 359 unrelated sentence triplets (See the "MeaningBERT" section (3.4.) in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
## Dataset Structure
### Data Instances
- `meaning` configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label).
- `meaning_with_data_augmentation` configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label) along with 1,355 data augmentation triplets (Document, Document, 1) and 1,355 data augmentation triplets (Document, Unrelated Document, 0) (See the sanity checks in our [article](https://www.frontiersin.org/articles/10.3389/frai.2023.1223924/full)).
- `meaning_holdout_identical` configuration: an instance consists of 359 meaning holdout preservation identical triplets (Document, Document, 1) based on the ASSET Simplification dataset.
- `meaning_holdout_unrelated` configuration: an instance consists of 359 meaning holdout preservation unrelated triplets (Document, Unrelated Document, 0) based on the ASSET Simplification dataset.
### Data Fields
- `original`: an original sentence from the source datasets.
- `simplification`: a simplification of the original obtained by an automated system or a human.
- `label`: a meaning preservation rating between 0 and 100.
### Data Splits
The split statistics of CSMD are given below.
| | Train | Dev | Test | Total |
| ------ | ------ | ------ | ---- | ----- |
| Meaning | 853 | 95 | 407 | 1,355 |
| Meaning With Data Augmentation | 2,560 | 285 | 1,220 | 4,065 |
| Meaning Holdout Identical | NA | NA | 359 | 359 |
| Meaning Holdout Unrelated | NA | NA | 359 | 359 |
All the splits are randomly split using a 60-10-30 split with the seed `42`.
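For reference, here is a minimal sketch (not part of the original card) of loading these configurations with the `datasets` library; the configuration, split, and field names are the ones declared in this repository's metadata and described above:

```python
from datasets import load_dataset

# Main meaning-preservation configuration: train/dev/test splits.
meaning = load_dataset("davebulaval/CSMD", "meaning")
print(meaning["train"][0])  # {'original': ..., 'simplification': ..., 'label': ...}

# The holdout subsets only expose a test split.
identical = load_dataset("davebulaval/CSMD", "meaning_holdout_identical", split="test")
unrelated = load_dataset("davebulaval/CSMD", "meaning_holdout_unrelated", split="test")
```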
# Citation Information
```
@ARTICLE{10.3389/frai.2023.1223924,
AUTHOR={Beauchemin, David and Saggion, Horacio and Khoury, Richard},
TITLE={{MeaningBERT: Assessing Meaning Preservation Between Sentences}},
JOURNAL={Frontiers in Artificial Intelligence},
VOLUME={6},
YEAR={2023},
URL={https://www.frontiersin.org/articles/10.3389/frai.2023.1223924},
DOI={10.3389/frai.2023.1223924},
ISSN={2624-8212},
}
``` | davebulaval/CSMD | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"multilinguality:monolingual",
"multilinguality:aligned",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da",
"language:en",
"license:cc-by-4.0",
"simplification-evaluation",
"meaning-evaluation",
"arxiv:2104.07560",
"region:us"
]
| 2023-11-13T15:38:03+00:00 | {"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual", "aligned"], "size_categories": ["1K<n<10K"], "source_datasets": ["original", "extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da"], "task_categories": ["text-classification", "text2text-generation"], "pretty_name": "CSMD", "config_names": ["meaning", "meaning_with_data_augmentation", "meaning_holdout_identical", "meaning_holdout_unrelated"], "tags": ["simplification-evaluation", "meaning-evaluation"], "dataset_info": [{"config_name": "meaning", "features": [{"name": "original", "dtype": "string"}, {"name": "simplification", "dtype": "string"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 251558, "num_examples": 853}, {"name": "dev", "num_bytes": 27794, "num_examples": 95}, {"name": "test", "num_bytes": 117686, "num_examples": 407}], "download_size": 397038, "dataset_size": 1355}, {"config_name": "meaning_with_data_augmentation", "features": [{"name": "original", "dtype": "string"}, {"name": "simplification", "dtype": "string"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1151604, "num_examples": 2560}, {"name": "dev", "num_bytes": 120991, "num_examples": 285}, {"name": "test", "num_bytes": 540844, "num_examples": 1220}], "download_size": 1813439, "dataset_size": 4065}, {"config_name": "meaning_holdout_identical", "features": [{"name": "original", "dtype": "string"}, {"name": "simplification", "dtype": "string"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 89866, "num_examples": 359}], "download_size": 89866, "dataset_size": 359}, {"config_name": "meaning_holdout_unrelated", "features": [{"name": "original", "dtype": "string"}, {"name": "simplification", "dtype": "string"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 247835, "num_examples": 359}], "download_size": 247835, "dataset_size": 359}], "viewer": true, "configs": [{"config_name": "meaning", "data_files": [{"split": "train", "path": "train.tsv"}, {"split": "dev", "path": "dev.tsv"}, {"split": "test", "path": "test.tsv"}]}, {"config_name": "meaning_with_data_augmentation", "data_files": [{"split": "train", "path": "train_da.tsv"}, {"split": "dev", "path": "dev_da.tsv"}, {"split": "test", "path": "test_da.tsv"}]}, {"config_name": "meaning_holdout_identical", "data_files": [{"split": "test", "path": "identical.tsv"}]}, {"config_name": "meaning_holdout_unrelated", "data_files": [{"split": "test", "path": "unrelated.tsv"}]}]} | 2023-11-21T18:38:13+00:00 | [
"2104.07560"
]
| [
"en"
]
| TAGS
#task_categories-text-classification #task_categories-text2text-generation #multilinguality-monolingual #multilinguality-aligned #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da #language-English #license-cc-by-4.0 #simplification-evaluation #meaning-evaluation #arxiv-2104.07560 #region-us
| Dataset Card for "Continuous Scale Meaning Dataset" (CSMD)
==========================================================
CSMD was created for MeaningBERT: Assessing Meaning Preservation Between Sentences.
It contains 1,355 English text simplification meaning preservation annotations. Meaning preservation measures how well the meaning of the output text corresponds to the meaning of the source (Saggion, 2017).
The annotations were taken from the following four datasets:
* ASSET
* QuestEVal,
* SimpDa\_2022 and,
* Simplicity-DA.
It contains a data augmentation subset of 1,355 identical sentence triplets and 1,355 unrelated sentence triplets (See the "Sanity Checks" section (3.3.) in our article).
It also contains two holdout subsets of 359 identical sentence triplets and 359 unrelated sentence triples (See the "MeaningBERT" section (3.4.) in our article).
Dataset Structure
-----------------
### Data Instances
* 'Meaning' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label).
* 'meaning\_with\_data\_augmentation' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label) along with 1,355 data augmentation triplets (Document, Document, 1) and 1,355 data augmentation triplets (Document, Unrelated Document, 0) (See the sanity checks in our article).
* 'meaning\_holdout\_identical' configuration: an instance consists of 359 meaning holdout preservation identical triplets (Document, Document, 1) based on the ASSET Simplification dataset.
* 'meaning\_holdout\_unrelated' configuration: an instance consists of 359 meaning holdout preservation unrelated triplets (Document, Unrelated Document, 0) based on the ASSET Simplification dataset.
### Data Fields
* 'original': an original sentence from the source datasets.
* 'simplification': a simplification of the original obtained by an automated system or a human.
* 'label': a meaning preservation rating between 0 and 100.
### Data Splits
The split statistics of CSMD are given below.
All the splits are randomly split using a 60-10-30 split with the seed '42'.
| [
"### Data Instances\n\n\n* 'Meaning' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label).\n* 'meaning\\_with\\_data\\_augmentation' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label) along with 1,355 data augmentation triplets (Document, Document, 1) and 1,355 data augmentation triplets (Document, Unrelated Document, 0) (See the sanity checks in our article).\n* 'meaning\\_holdout\\_identical' configuration: an instance consists of 359 meaning holdout preservation identical triplets (Document, Document, 1) based on the ASSET Simplification dataset.\n* 'meaning\\_holdout\\_unrelated' configuration: an instance consists of 359 meaning holdout preservation unrelated triplets (Document, Unrelated Document, 0) based on the ASSET Simplification dataset.",
"### Data Fields\n\n\n* 'original': an original sentence from the source datasets.\n* 'simplification': a simplification of the original obtained by an automated system or a human.\n* 'label': a meaning preservation rating between 0 and 100.",
"### Data Splits\n\n\nThe split statistics of CSMD are given below.\n\n\n\nAll the splits are randomly split using a 60-10-30 split with the seed '42'."
]
| [
"TAGS\n#task_categories-text-classification #task_categories-text2text-generation #multilinguality-monolingual #multilinguality-aligned #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da #language-English #license-cc-by-4.0 #simplification-evaluation #meaning-evaluation #arxiv-2104.07560 #region-us \n",
"### Data Instances\n\n\n* 'Meaning' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label).\n* 'meaning\\_with\\_data\\_augmentation' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label) along with 1,355 data augmentation triplets (Document, Document, 1) and 1,355 data augmentation triplets (Document, Unrelated Document, 0) (See the sanity checks in our article).\n* 'meaning\\_holdout\\_identical' configuration: an instance consists of 359 meaning holdout preservation identical triplets (Document, Document, 1) based on the ASSET Simplification dataset.\n* 'meaning\\_holdout\\_unrelated' configuration: an instance consists of 359 meaning holdout preservation unrelated triplets (Document, Unrelated Document, 0) based on the ASSET Simplification dataset.",
"### Data Fields\n\n\n* 'original': an original sentence from the source datasets.\n* 'simplification': a simplification of the original obtained by an automated system or a human.\n* 'label': a meaning preservation rating between 0 and 100.",
"### Data Splits\n\n\nThe split statistics of CSMD are given below.\n\n\n\nAll the splits are randomly split using a 60-10-30 split with the seed '42'."
]
| [
140,
212,
59,
38
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-text2text-generation #multilinguality-monolingual #multilinguality-aligned #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|other-turkcorpus,other-asset,other-questeval,other-simplicity_da,other-simp_da #language-English #license-cc-by-4.0 #simplification-evaluation #meaning-evaluation #arxiv-2104.07560 #region-us \n### Data Instances\n\n\n* 'Meaning' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label).\n* 'meaning\\_with\\_data\\_augmentation' configuration: an instance consists of 1,355 meaning preservation triplets (Document, simplification, label) along with 1,355 data augmentation triplets (Document, Document, 1) and 1,355 data augmentation triplets (Document, Unrelated Document, 0) (See the sanity checks in our article).\n* 'meaning\\_holdout\\_identical' configuration: an instance consists of 359 meaning holdout preservation identical triplets (Document, Document, 1) based on the ASSET Simplification dataset.\n* 'meaning\\_holdout\\_unrelated' configuration: an instance consists of 359 meaning holdout preservation unrelated triplets (Document, Unrelated Document, 0) based on the ASSET Simplification dataset.### Data Fields\n\n\n* 'original': an original sentence from the source datasets.\n* 'simplification': a simplification of the original obtained by an automated system or a human.\n* 'label': a meaning preservation rating between 0 and 100.### Data Splits\n\n\nThe split statistics of CSMD are given below.\n\n\n\nAll the splits are randomly split using a 60-10-30 split with the seed '42'."
]
|
1ccdf977188f72d5377981b838652321fe613e2b |
---
**Dataset Summary:**
- **Name:** [Teknium DataForge Economics](https://huggingface.co/datasets/teknium/dataforge-economics)
- **Creator:** [Teknium](https://huggingface.co/teknium)
- **License:** MIT
- **Task Categories:** Conversational Analysis in Finance
- **Language:** English
- **Tags:** Finance, Economic Conversations
- **Size:** Less than 1,000 records
- **Format:** JSONL with standard Alpaca structure
- **Unique Features:**
- Fields: `instruction`, `input`, `output`, etc.
- Additional Fields: `origin` (dataset name), `conversation_id` (unique identifier for tracking conversations)
- **Additional Information:** For more detailed information about the dataset, visit [Teknium DataForge Economics on Hugging Face](https://huggingface.co/datasets/teknium/dataforge-economics).
Transform by [Convector](https://github.com/teilomillet/convector) with Love.
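As a rough sketch (not part of the original card), the records can be inspected with the `datasets` library; the split name below is an assumption, and the printed fields are the ones listed above:

```python
from datasets import load_dataset

# Split name "train" is assumed; adjust it if the repository uses another one.
ds = load_dataset("Convector/dataforge-economics-standard", split="train")

example = ds[0]
# Alpaca-style fields plus the provenance fields described above.
print(example["instruction"])
print(example["output"])
print(example["origin"], example["conversation_id"])
```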
---
| Convector/dataforge-economics-standard | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:mit",
"finance",
"region:us"
]
| 2023-11-13T15:45:04+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["conversational"], "tags": ["finance"]} | 2023-11-13T16:16:17+00:00 | []
| [
"en"
]
| TAGS
#task_categories-conversational #size_categories-n<1K #language-English #license-mit #finance #region-us
|
---
Dataset Summary:
- Name: Teknium DataForge Economics
- Creator: Teknium
- License: MIT
- Task Categories: Conversational Analysis in Finance
- Language: English
- Tags: Finance, Economic Conversations
- Size: Less than 1,000 records
- Format: JSONL with standard Alpaca structure
- Unique Features:
- Fields: 'instruction', 'input', 'output', etc.
- Additional Fields: 'origin' (dataset name), 'conversation_id' (unique identifier for tracking conversations)
- Additional Information: For more detailed information about the dataset, visit Teknium DataForge Economics on Hugging Face.
Transform by Convector with Love.
---
| []
| [
"TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-mit #finance #region-us \n"
]
| [
38
]
| [
"passage: TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-mit #finance #region-us \n"
]
|
5d127115858881c6b781362de2eee2d0390ebbba |
# Dataset Card for Evaluation run of 42dot/42dot_LLM-PLM-1.3B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/42dot/42dot_LLM-PLM-1.3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_42dot__42dot_LLM-PLM-1.3B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-15T08:12:34.029868](https://huggingface.co/datasets/open-llm-leaderboard/details_42dot__42dot_LLM-PLM-1.3B_public/blob/main/results_2023-11-15T08-12-34.029868.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2748396034833794,
"acc_stderr": 0.03133274597965432,
"acc_norm": 0.2767290148254369,
"acc_norm_stderr": 0.032124763692846635,
"mc1": 0.23378212974296206,
"mc1_stderr": 0.014816195991931578,
"mc2": 0.38680931810418795,
"mc2_stderr": 0.013939564847231014,
"em": 0.001153523489932886,
"em_stderr": 0.0003476179896857114,
"f1": 0.04587562919463095,
"f1_stderr": 0.0011468980714363175
},
"harness|arc:challenge|25": {
"acc": 0.30119453924914674,
"acc_stderr": 0.013406741767847627,
"acc_norm": 0.3242320819112628,
"acc_norm_stderr": 0.01367881039951882
},
"harness|hellaswag|10": {
"acc": 0.4287990440151364,
"acc_stderr": 0.0049389301432344514,
"acc_norm": 0.563931487751444,
"acc_norm_stderr": 0.004948824501355477
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768081,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768081
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.03673731683969506,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.03673731683969506
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.18421052631578946,
"acc_stderr": 0.0315469804508223,
"acc_norm": 0.18421052631578946,
"acc_norm_stderr": 0.0315469804508223
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322716,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322716
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2679245283018868,
"acc_stderr": 0.027257260322494845,
"acc_norm": 0.2679245283018868,
"acc_norm_stderr": 0.027257260322494845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2708333333333333,
"acc_stderr": 0.03716177437566016,
"acc_norm": 0.2708333333333333,
"acc_norm_stderr": 0.03716177437566016
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.27167630057803466,
"acc_stderr": 0.03391750322321659,
"acc_norm": 0.27167630057803466,
"acc_norm_stderr": 0.03391750322321659
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.17647058823529413,
"acc_stderr": 0.03793281185307809,
"acc_norm": 0.17647058823529413,
"acc_norm_stderr": 0.03793281185307809
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.31063829787234043,
"acc_stderr": 0.03025123757921317,
"acc_norm": 0.31063829787234043,
"acc_norm_stderr": 0.03025123757921317
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.04142439719489362,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.04142439719489362
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.18620689655172415,
"acc_stderr": 0.03243946159004616,
"acc_norm": 0.18620689655172415,
"acc_norm_stderr": 0.03243946159004616
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.0220190800122179,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.0220190800122179
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.24603174603174602,
"acc_stderr": 0.038522733649243156,
"acc_norm": 0.24603174603174602,
"acc_norm_stderr": 0.038522733649243156
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.22580645161290322,
"acc_stderr": 0.02378557788418101,
"acc_norm": 0.22580645161290322,
"acc_norm_stderr": 0.02378557788418101
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.270935960591133,
"acc_stderr": 0.031270907132976984,
"acc_norm": 0.270935960591133,
"acc_norm_stderr": 0.031270907132976984
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.02962022787479047,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.02962022787479047
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.3471502590673575,
"acc_stderr": 0.034356961683613546,
"acc_norm": 0.3471502590673575,
"acc_norm_stderr": 0.034356961683613546
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.34615384615384615,
"acc_stderr": 0.024121125416941183,
"acc_norm": 0.34615384615384615,
"acc_norm_stderr": 0.024121125416941183
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.027309140588230175,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.027309140588230175
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.29831932773109243,
"acc_stderr": 0.02971914287634286,
"acc_norm": 0.29831932773109243,
"acc_norm_stderr": 0.02971914287634286
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.26055045871559634,
"acc_stderr": 0.01881918203485007,
"acc_norm": 0.26055045871559634,
"acc_norm_stderr": 0.01881918203485007
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.47685185185185186,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.47685185185185186,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.0309645179269234,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.0309645179269234
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.23628691983122363,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.23628691983122363,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.25112107623318386,
"acc_stderr": 0.029105220833224622,
"acc_norm": 0.25112107623318386,
"acc_norm_stderr": 0.029105220833224622
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.20610687022900764,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.20610687022900764,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.3884297520661157,
"acc_stderr": 0.04449270350068382,
"acc_norm": 0.3884297520661157,
"acc_norm_stderr": 0.04449270350068382
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2037037037037037,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.2037037037037037,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.26993865030674846,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.26993865030674846,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285713,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285713
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.28205128205128205,
"acc_stderr": 0.02948036054954119,
"acc_norm": 0.28205128205128205,
"acc_norm_stderr": 0.02948036054954119
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23243933588761176,
"acc_stderr": 0.015104550008905706,
"acc_norm": 0.23243933588761176,
"acc_norm_stderr": 0.015104550008905706
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2138728323699422,
"acc_stderr": 0.022075709251757177,
"acc_norm": 0.2138728323699422,
"acc_norm_stderr": 0.022075709251757177
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2681564245810056,
"acc_stderr": 0.014816119635317003,
"acc_norm": 0.2681564245810056,
"acc_norm_stderr": 0.014816119635317003
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.025829163272757485,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.025829163272757485
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.3183279742765273,
"acc_stderr": 0.02645722506781102,
"acc_norm": 0.3183279742765273,
"acc_norm_stderr": 0.02645722506781102
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.26851851851851855,
"acc_stderr": 0.024659685185967277,
"acc_norm": 0.26851851851851855,
"acc_norm_stderr": 0.024659685185967277
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2872340425531915,
"acc_stderr": 0.026992199173064356,
"acc_norm": 0.2872340425531915,
"acc_norm_stderr": 0.026992199173064356
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2392438070404172,
"acc_stderr": 0.010896123652676646,
"acc_norm": 0.2392438070404172,
"acc_norm_stderr": 0.010896123652676646
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.44485294117647056,
"acc_stderr": 0.030187532060329376,
"acc_norm": 0.44485294117647056,
"acc_norm_stderr": 0.030187532060329376
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2565359477124183,
"acc_stderr": 0.017667841612378984,
"acc_norm": 0.2565359477124183,
"acc_norm_stderr": 0.017667841612378984
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.24545454545454545,
"acc_stderr": 0.041220665028782855,
"acc_norm": 0.24545454545454545,
"acc_norm_stderr": 0.041220665028782855
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.2693877551020408,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.2693877551020408,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.03014777593540922,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.03014777593540922
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2710843373493976,
"acc_stderr": 0.034605799075530255,
"acc_norm": 0.2710843373493976,
"acc_norm_stderr": 0.034605799075530255
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.03377310252209194,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.03377310252209194
},
"harness|truthfulqa:mc|0": {
"mc1": 0.23378212974296206,
"mc1_stderr": 0.014816195991931578,
"mc2": 0.38680931810418795,
"mc2_stderr": 0.013939564847231014
},
"harness|winogrande|5": {
"acc": 0.5887924230465666,
"acc_stderr": 0.013829128358676872
},
"harness|drop|3": {
"em": 0.001153523489932886,
"em_stderr": 0.0003476179896857114,
"f1": 0.04587562919463095,
"f1_stderr": 0.0011468980714363175
},
"harness|gsm8k|5": {
"acc": 0.0075815011372251705,
"acc_stderr": 0.002389281512077207
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_42dot__42dot_LLM-PLM-1.3B | [
"region:us"
]
| 2023-11-13T15:46:00+00:00 | {"pretty_name": "Evaluation run of 42dot/42dot_LLM-PLM-1.3B", "dataset_summary": "Dataset automatically created during the evaluation run of model [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_42dot__42dot_LLM-PLM-1.3B_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-15T08:12:34.029868](https://huggingface.co/datasets/open-llm-leaderboard/details_42dot__42dot_LLM-PLM-1.3B_public/blob/main/results_2023-11-15T08-12-34.029868.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2748396034833794,\n \"acc_stderr\": 0.03133274597965432,\n \"acc_norm\": 0.2767290148254369,\n \"acc_norm_stderr\": 0.032124763692846635,\n \"mc1\": 0.23378212974296206,\n \"mc1_stderr\": 0.014816195991931578,\n \"mc2\": 0.38680931810418795,\n \"mc2_stderr\": 0.013939564847231014,\n \"em\": 0.001153523489932886,\n \"em_stderr\": 0.0003476179896857114,\n \"f1\": 0.04587562919463095,\n \"f1_stderr\": 0.0011468980714363175\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.30119453924914674,\n \"acc_stderr\": 0.013406741767847627,\n \"acc_norm\": 0.3242320819112628,\n \"acc_norm_stderr\": 0.01367881039951882\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4287990440151364,\n \"acc_stderr\": 0.0049389301432344514,\n \"acc_norm\": 0.563931487751444,\n \"acc_norm_stderr\": 0.004948824501355477\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768081,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768081\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.23703703703703705,\n \"acc_stderr\": 0.03673731683969506,\n \"acc_norm\": 0.23703703703703705,\n \"acc_norm_stderr\": 0.03673731683969506\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.18421052631578946,\n \"acc_stderr\": 0.0315469804508223,\n \"acc_norm\": 0.18421052631578946,\n \"acc_norm_stderr\": 0.0315469804508223\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322716,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322716\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2679245283018868,\n \"acc_stderr\": 0.027257260322494845,\n \"acc_norm\": 0.2679245283018868,\n \"acc_norm_stderr\": 0.027257260322494845\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2708333333333333,\n \"acc_stderr\": 0.03716177437566016,\n \"acc_norm\": 
0.2708333333333333,\n \"acc_norm_stderr\": 0.03716177437566016\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.27167630057803466,\n \"acc_stderr\": 0.03391750322321659,\n \"acc_norm\": 0.27167630057803466,\n \"acc_norm_stderr\": 0.03391750322321659\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.03793281185307809,\n \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.03793281185307809\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.31063829787234043,\n \"acc_stderr\": 0.03025123757921317,\n \"acc_norm\": 0.31063829787234043,\n \"acc_norm_stderr\": 0.03025123757921317\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.04142439719489362,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.04142439719489362\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.18620689655172415,\n \"acc_stderr\": 0.03243946159004616,\n \"acc_norm\": 0.18620689655172415,\n \"acc_norm_stderr\": 0.03243946159004616\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.0220190800122179,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.0220190800122179\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.038522733649243156,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.038522733649243156\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.22580645161290322,\n \"acc_stderr\": 0.02378557788418101,\n \"acc_norm\": 0.22580645161290322,\n \"acc_norm_stderr\": 0.02378557788418101\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.270935960591133,\n \"acc_stderr\": 0.031270907132976984,\n \"acc_norm\": 0.270935960591133,\n \"acc_norm_stderr\": 0.031270907132976984\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.02962022787479047,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.02962022787479047\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.3471502590673575,\n \"acc_stderr\": 
0.034356961683613546,\n \"acc_norm\": 0.3471502590673575,\n \"acc_norm_stderr\": 0.034356961683613546\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.34615384615384615,\n \"acc_stderr\": 0.024121125416941183,\n \"acc_norm\": 0.34615384615384615,\n \"acc_norm_stderr\": 0.024121125416941183\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230175,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230175\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.29831932773109243,\n \"acc_stderr\": 0.02971914287634286,\n \"acc_norm\": 0.29831932773109243,\n \"acc_norm_stderr\": 0.02971914287634286\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.26055045871559634,\n \"acc_stderr\": 0.01881918203485007,\n \"acc_norm\": 0.26055045871559634,\n \"acc_norm_stderr\": 0.01881918203485007\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.47685185185185186,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.47685185185185186,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.0309645179269234,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.0309645179269234\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.23628691983122363,\n \"acc_stderr\": 0.027652153144159263,\n \"acc_norm\": 0.23628691983122363,\n \"acc_norm_stderr\": 0.027652153144159263\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.25112107623318386,\n \"acc_stderr\": 0.029105220833224622,\n \"acc_norm\": 0.25112107623318386,\n \"acc_norm_stderr\": 0.029105220833224622\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.20610687022900764,\n \"acc_stderr\": 0.03547771004159463,\n \"acc_norm\": 0.20610687022900764,\n \"acc_norm_stderr\": 0.03547771004159463\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.3884297520661157,\n \"acc_stderr\": 0.04449270350068382,\n \"acc_norm\": 0.3884297520661157,\n \"acc_norm_stderr\": 0.04449270350068382\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2037037037037037,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.2037037037037037,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.26993865030674846,\n \"acc_stderr\": 0.034878251684978906,\n \"acc_norm\": 0.26993865030674846,\n \"acc_norm_stderr\": 0.034878251684978906\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n \"acc_stderr\": 0.04464285714285713,\n \"acc_norm\": 0.33035714285714285,\n \"acc_norm_stderr\": 0.04464285714285713\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.28205128205128205,\n \"acc_stderr\": 0.02948036054954119,\n \"acc_norm\": 0.28205128205128205,\n \"acc_norm_stderr\": 0.02948036054954119\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n 
\"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23243933588761176,\n \"acc_stderr\": 0.015104550008905706,\n \"acc_norm\": 0.23243933588761176,\n \"acc_norm_stderr\": 0.015104550008905706\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.2138728323699422,\n \"acc_stderr\": 0.022075709251757177,\n \"acc_norm\": 0.2138728323699422,\n \"acc_norm_stderr\": 0.022075709251757177\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2681564245810056,\n \"acc_stderr\": 0.014816119635317003,\n \"acc_norm\": 0.2681564245810056,\n \"acc_norm_stderr\": 0.014816119635317003\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.025829163272757485,\n \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.025829163272757485\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3183279742765273,\n \"acc_stderr\": 0.02645722506781102,\n \"acc_norm\": 0.3183279742765273,\n \"acc_norm_stderr\": 0.02645722506781102\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.26851851851851855,\n \"acc_stderr\": 0.024659685185967277,\n \"acc_norm\": 0.26851851851851855,\n \"acc_norm_stderr\": 0.024659685185967277\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2872340425531915,\n \"acc_stderr\": 0.026992199173064356,\n \"acc_norm\": 0.2872340425531915,\n \"acc_norm_stderr\": 0.026992199173064356\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2392438070404172,\n \"acc_stderr\": 0.010896123652676646,\n \"acc_norm\": 0.2392438070404172,\n \"acc_norm_stderr\": 0.010896123652676646\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.44485294117647056,\n \"acc_stderr\": 0.030187532060329376,\n \"acc_norm\": 0.44485294117647056,\n \"acc_norm_stderr\": 0.030187532060329376\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2565359477124183,\n \"acc_stderr\": 0.017667841612378984,\n \"acc_norm\": 0.2565359477124183,\n \"acc_norm_stderr\": 0.017667841612378984\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.24545454545454545,\n \"acc_stderr\": 0.041220665028782855,\n \"acc_norm\": 0.24545454545454545,\n \"acc_norm_stderr\": 0.041220665028782855\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.2693877551020408,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.2693877551020408,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n \"acc_stderr\": 0.03014777593540922,\n \"acc_norm\": 0.23880597014925373,\n \"acc_norm_stderr\": 0.03014777593540922\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2710843373493976,\n \"acc_stderr\": 0.034605799075530255,\n \"acc_norm\": 0.2710843373493976,\n \"acc_norm_stderr\": 0.034605799075530255\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.03377310252209194,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.03377310252209194\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23378212974296206,\n \"mc1_stderr\": 0.014816195991931578,\n \"mc2\": 0.38680931810418795,\n \"mc2_stderr\": 0.013939564847231014\n },\n \"harness|winogrande|5\": {\n \"acc\": 
0.5887924230465666,\n \"acc_stderr\": 0.013829128358676872\n },\n \"harness|drop|3\": {\n \"em\": 0.001153523489932886,\n \"em_stderr\": 0.0003476179896857114,\n \"f1\": 0.04587562919463095,\n \"f1_stderr\": 0.0011468980714363175\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0075815011372251705,\n \"acc_stderr\": 0.002389281512077207\n }\n}\n```", "repo_url": "https://huggingface.co/42dot/42dot_LLM-PLM-1.3B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|arc:challenge|25_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|drop|3_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|drop|3_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|gsm8k|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hellaswag|10_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-43-12.146243.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-43-12.146243.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-43-12.146243.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-15T08-12-34.029868.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-15T08-12-34.029868.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-15T08-12-34.029868.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-15T08-12-34.029868.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": 
["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-econometrics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-15T08-12-34.029868.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-15T08-12-34.029868.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": 
["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-15T08-12-34.029868.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", 
"path": ["**/details_harness|truthfulqa:mc|0_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["**/details_harness|winogrande|5_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["**/details_harness|winogrande|5_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-15T08-12-34.029868.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_43_12.146243", "path": ["results_2023-11-13T15-43-12.146243.parquet"]}, {"split": "2023_11_15T08_12_34.029868", "path": ["results_2023-11-15T08-12-34.029868.parquet"]}, {"split": "latest", "path": ["results_2023-11-15T08-12-34.029868.parquet"]}]}]} | 2023-11-15T08:15:30+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of 42dot/42dot_LLM-PLM-1.3B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model 42dot/42dot_LLM-PLM-1.3B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-15T08:12:34.029868 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of 42dot/42dot_LLM-PLM-1.3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 42dot/42dot_LLM-PLM-1.3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-15T08:12:34.029868(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of 42dot/42dot_LLM-PLM-1.3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 42dot/42dot_LLM-PLM-1.3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-15T08:12:34.029868(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
23,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of 42dot/42dot_LLM-PLM-1.3B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model 42dot/42dot_LLM-PLM-1.3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-15T08:12:34.029868(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
3650459ef9461ce87f93921ed201af1334d5af87 |
# Dataset Card for Evaluation run of uukuguy/Mistral-7B-OpenOrca-lora
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/Mistral-7B-OpenOrca-lora
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/Mistral-7B-OpenOrca-lora](https://huggingface.co/uukuguy/Mistral-7B-OpenOrca-lora) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora_public",
"harness_winogrande_5",
split="train")
```
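
The aggregated scores described above live in the separate "results" configuration. As a minimal sketch (assuming the "results" configuration exposes the same "latest" split naming as the per-task configurations), they can be loaded like this:

```python
from datasets import load_dataset

# Aggregated metrics for the most recent run; "latest" mirrors the per-task
# configurations and always points to the newest results.
results = load_dataset(
    "open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora_public",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated scores for the run
```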
## Latest results
These are the [latest results from run 2023-11-13T15:44:18.785582](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora_public/blob/main/results_2023-11-13T15-44-18.785582.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6351832920969729,
"acc_stderr": 0.03210898212657927,
"acc_norm": 0.6445450507876114,
"acc_norm_stderr": 0.03280393070910138,
"mc1": 0.2839657282741738,
"mc1_stderr": 0.015785370858396725,
"mc2": 0.4274271734982197,
"mc2_stderr": 0.014247308828610854,
"em": 0.0019924496644295304,
"em_stderr": 0.00045666764626669387,
"f1": 0.06191694630872485,
"f1_stderr": 0.0013823026381279647
},
"harness|arc:challenge|25": {
"acc": 0.5742320819112628,
"acc_stderr": 0.014449464278868807,
"acc_norm": 0.6194539249146758,
"acc_norm_stderr": 0.014188277712349814
},
"harness|hellaswag|10": {
"acc": 0.6357299342760406,
"acc_stderr": 0.004802413919932666,
"acc_norm": 0.8361880103565027,
"acc_norm_stderr": 0.003693484894179416
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.03842498559395268,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.03842498559395268
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880267,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880267
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.03669072477416907,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.03669072477416907
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.02490699045899257,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.02490699045899257
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377562,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377562
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.024362599693031096,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.024362599693031096
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.0303137105381989,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.0303137105381989
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919443,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402534,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402534
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616258,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616258
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.03068473711513536,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.03068473711513536
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8220183486238533,
"acc_stderr": 0.016399436366612927,
"acc_norm": 0.8220183486238533,
"acc_norm_stderr": 0.016399436366612927
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.033922384053216174,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.033922384053216174
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639318,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7975460122699386,
"acc_stderr": 0.03157065078911901,
"acc_norm": 0.7975460122699386,
"acc_norm_stderr": 0.03157065078911901
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822585,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822585
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092375,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092375
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8109833971902938,
"acc_stderr": 0.014000791294407003,
"acc_norm": 0.8109833971902938,
"acc_norm_stderr": 0.014000791294407003
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7196531791907514,
"acc_stderr": 0.02418242749657761,
"acc_norm": 0.7196531791907514,
"acc_norm_stderr": 0.02418242749657761
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3318435754189944,
"acc_stderr": 0.015748421208187306,
"acc_norm": 0.3318435754189944,
"acc_norm_stderr": 0.015748421208187306
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.02473998135511359,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.02473998135511359
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.025839898334877983,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.025839898334877983
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7283950617283951,
"acc_stderr": 0.024748624490537375,
"acc_norm": 0.7283950617283951,
"acc_norm_stderr": 0.024748624490537375
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45241199478487615,
"acc_stderr": 0.012712265105889133,
"acc_norm": 0.45241199478487615,
"acc_norm_stderr": 0.012712265105889133
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.028418208619406755,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.028418208619406755
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.018901015322093085,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.018901015322093085
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784593,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784593
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616913,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616913
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2839657282741738,
"mc1_stderr": 0.015785370858396725,
"mc2": 0.4274271734982197,
"mc2_stderr": 0.014247308828610854
},
"harness|winogrande|5": {
"acc": 0.7908445146014207,
"acc_stderr": 0.011430450045881575
},
"harness|drop|3": {
"em": 0.0019924496644295304,
"em_stderr": 0.00045666764626669387,
"f1": 0.06191694630872485,
"f1_stderr": 0.0013823026381279647
},
"harness|gsm8k|5": {
"acc": 0.1728582259287339,
"acc_stderr": 0.010415432246200585
}
}
```
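
The same numbers can also be read from the raw JSON file linked above, without going through the parquet configurations. A small sketch using `huggingface_hub` (the filename is taken from the run timestamp in the link; the exact internal layout of the file is an assumption, so only the top-level keys are printed here):

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the results JSON for the 2023-11-13T15:44:18 run from the dataset repo.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora_public",
    filename="results_2023-11-13T15-44-18.785582.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)

# The score block shown above is contained in this file; inspect its structure.
print(sorted(data))
```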
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora | [
"region:us"
]
| 2023-11-13T15:47:20+00:00 | {"pretty_name": "Evaluation run of uukuguy/Mistral-7B-OpenOrca-lora", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/Mistral-7B-OpenOrca-lora](https://huggingface.co/uukuguy/Mistral-7B-OpenOrca-lora) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T15:44:18.785582](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora_public/blob/main/results_2023-11-13T15-44-18.785582.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6351832920969729,\n \"acc_stderr\": 0.03210898212657927,\n \"acc_norm\": 0.6445450507876114,\n \"acc_norm_stderr\": 0.03280393070910138,\n \"mc1\": 0.2839657282741738,\n \"mc1_stderr\": 0.015785370858396725,\n \"mc2\": 0.4274271734982197,\n \"mc2_stderr\": 0.014247308828610854,\n \"em\": 0.0019924496644295304,\n \"em_stderr\": 0.00045666764626669387,\n \"f1\": 0.06191694630872485,\n \"f1_stderr\": 0.0013823026381279647\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5742320819112628,\n \"acc_stderr\": 0.014449464278868807,\n \"acc_norm\": 0.6194539249146758,\n \"acc_norm_stderr\": 0.014188277712349814\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6357299342760406,\n \"acc_stderr\": 0.004802413919932666,\n \"acc_norm\": 0.8361880103565027,\n \"acc_norm_stderr\": 0.003693484894179416\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6792452830188679,\n \"acc_stderr\": 0.028727502957880267,\n \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.028727502957880267\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n \"acc_stderr\": 
0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416907,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416907\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.373015873015873,\n \"acc_stderr\": 0.02490699045899257,\n \"acc_norm\": 0.373015873015873,\n \"acc_norm_stderr\": 0.02490699045899257\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n \"acc_stderr\": 0.024362599693031096,\n \"acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.024362599693031096\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.0303137105381989,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.0303137105381989\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 
0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616258,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616258\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.03068473711513536,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.03068473711513536\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8220183486238533,\n \"acc_stderr\": 0.016399436366612927,\n \"acc_norm\": 0.8220183486238533,\n \"acc_norm_stderr\": 0.016399436366612927\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.033922384053216174,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.033922384053216174\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639318,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639318\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159263,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159263\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.03157065078911901,\n \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.03157065078911901\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822585,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822585\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 
0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8109833971902938,\n \"acc_stderr\": 0.014000791294407003,\n \"acc_norm\": 0.8109833971902938,\n \"acc_norm_stderr\": 0.014000791294407003\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.02418242749657761,\n \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.02418242749657761\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3318435754189944,\n \"acc_stderr\": 0.015748421208187306,\n \"acc_norm\": 0.3318435754189944,\n \"acc_norm_stderr\": 0.015748421208187306\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.02473998135511359,\n \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.02473998135511359\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.025839898334877983,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.025839898334877983\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7283950617283951,\n \"acc_stderr\": 0.024748624490537375,\n \"acc_norm\": 0.7283950617283951,\n \"acc_norm_stderr\": 0.024748624490537375\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45241199478487615,\n \"acc_stderr\": 0.012712265105889133,\n \"acc_norm\": 0.45241199478487615,\n \"acc_norm_stderr\": 0.012712265105889133\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406755,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406755\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093085,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093085\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2839657282741738,\n \"mc1_stderr\": 0.015785370858396725,\n \"mc2\": 0.4274271734982197,\n \"mc2_stderr\": 0.014247308828610854\n },\n \"harness|winogrande|5\": {\n \"acc\": 
0.7908445146014207,\n \"acc_stderr\": 0.011430450045881575\n },\n \"harness|drop|3\": {\n \"em\": 0.0019924496644295304,\n \"em_stderr\": 0.00045666764626669387,\n \"f1\": 0.06191694630872485,\n \"f1_stderr\": 0.0013823026381279647\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1728582259287339,\n \"acc_stderr\": 0.010415432246200585\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/Mistral-7B-OpenOrca-lora", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|drop|3_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-44-18.785582.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-44-18.785582.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-44-18.785582.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["**/details_harness|winogrande|5_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T15-44-18.785582.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_44_18.785582", "path": ["results_2023-11-13T15-44-18.785582.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T15-44-18.785582.parquet"]}]}]} | 2023-11-13T15:48:06+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/Mistral-7B-OpenOrca-lora
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/Mistral-7B-OpenOrca-lora on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
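A minimal sketch, assuming the details repository for this model follows the same `details_<org>__<model>_public` naming pattern used for the other models in this document:
```python
from datasets import load_dataset

# Assumed repository id, inferred from the leaderboard's naming pattern;
# adjust if the actual repository name differs.
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__Mistral-7B-OpenOrca-lora_public",
    "harness_winogrande_5",
    split="train",
)
```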
## Latest results
These are the latest results from run 2023-11-13T15:44:18.785582 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of uukuguy/Mistral-7B-OpenOrca-lora",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/Mistral-7B-OpenOrca-lora on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:44:18.785582(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/Mistral-7B-OpenOrca-lora",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/Mistral-7B-OpenOrca-lora on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:44:18.785582(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
25,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/Mistral-7B-OpenOrca-lora## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/Mistral-7B-OpenOrca-lora on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T15:44:18.785582(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
d1ea0d92747746683081b8a543e351688051bac9 |
---
**Dataset Summary:**
- **Name:** [Teknium DataForge Economics](https://huggingface.co/datasets/teknium/dataforge-economics)
- **Creator:** [Teknium](https://huggingface.co/teknium)
- **License:** MIT
- **Task Categories:** Conversational Analysis in Finance
- **Language:** English
- **Tags:** Finance, Economic Conversations
- **Size:** Less than 1,000 records
- **Format:** JSONL following the OpenAI chat-completion message structure (see the parsing sketch after this list)
- **Unique Features:**
- Fields: `"messages": [
{"role": "system", "content": ""},
{"role": "user", "content": ""},
{"role": "assistant", "content": ""}
],
"":""`
- Additional Fields: `origin` (dataset name), `conversation_id` (unique identifier for tracking conversations)
- **Additional Information:** For more detailed information about the dataset, visit [Teknium DataForge Economics on Hugging Face](https://huggingface.co/datasets/teknium/dataforge-economics).
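A minimal parsing sketch for the record structure described above; the example line is illustrative and not taken from the dataset:

```python
import json

# Illustrative JSONL line matching the fields listed above
# ("messages" plus the "origin" and "conversation_id" metadata fields).
line = (
    '{"messages": [{"role": "system", "content": "You are an economics tutor."}, '
    '{"role": "user", "content": "What is inflation?"}, '
    '{"role": "assistant", "content": "Inflation is a sustained rise in the general price level."}], '
    '"origin": "dataforge-economics", "conversation_id": "example-0001"}'
)

record = json.loads(line)
for message in record["messages"]:
    print(f'{message["role"]}: {message["content"]}')
print("origin:", record.get("origin"), "| conversation_id:", record.get("conversation_id"))
```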
Transformed by [Convector](https://github.com/teilomillet/convector) with Love.
---
| Convector/dataforge-economics-CC | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:mit",
"finance",
"region:us"
]
| 2023-11-13T15:49:39+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["conversational"], "tags": ["finance"]} | 2023-11-13T16:18:47+00:00 | []
| [
"en"
]
| TAGS
#task_categories-conversational #size_categories-n<1K #language-English #license-mit #finance #region-us
|
---
Dataset Summary:
- Name: Teknium DataForge Economics
- Creator: Teknium
- License: MIT
- Task Categories: Conversational Analysis in Finance
- Language: English
- Tags: Finance, Economic Conversations
- Size: Less than 1,000 records
- Format: JSONL with chat completion OpenAI structure
- Unique Features:
- Fields: '"messages": [
{"role": "system", "content": ""},
{"role": "user", "content": ""},
{"role": "assistant", "content": ""}
],
"":""'
- Additional Fields: 'origin' (dataset name), 'conversation_id' (unique identifier for tracking conversations)
- Additional Information: For more detailed information about the dataset, visit Teknium DataForge Economics on Hugging Face.
Transform by Convector with Love.
---
| []
| [
"TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-mit #finance #region-us \n"
]
| [
38
]
| [
"passage: TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-mit #finance #region-us \n"
]
|
d7c383881ce5ea22c9918050748e382cd2d5f19c |
# Dataset Card for Evaluation run of 42dot/42dot_LLM-SFT-1.3B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/42dot/42dot_LLM-SFT-1.3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [42dot/42dot_LLM-SFT-1.3B](https://huggingface.co/42dot/42dot_LLM-SFT-1.3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B_public",
"harness_winogrande_5",
split="train")
```
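The aggregated metrics mentioned above live in the "results" configuration described earlier; a minimal sketch for loading its most recent run (the "results" config and "latest" split names come from this card):
```python
from datasets import load_dataset

# Aggregated results of the run; the "latest" split points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B_public",
    "results",
    split="latest",
)
```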
## Latest results
These are the [latest results from run 2023-11-13T15:47:16.910477](https://huggingface.co/datasets/open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B_public/blob/main/results_2023-11-13T15-47-16.910477.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.26083934068438247,
"acc_stderr": 0.03100224322901986,
"acc_norm": 0.262585495126005,
"acc_norm_stderr": 0.031783041105593664,
"mc1": 0.2386780905752754,
"mc1_stderr": 0.014922629695456418,
"mc2": 0.39979776889587376,
"mc2_stderr": 0.014420445519552157,
"em": 0.01583473154362416,
"em_stderr": 0.0012784360866061313,
"f1": 0.07108431208053706,
"f1_stderr": 0.0017891407240589372
},
"harness|arc:challenge|25": {
"acc": 0.3361774744027304,
"acc_stderr": 0.013804855026205758,
"acc_norm": 0.3609215017064846,
"acc_norm_stderr": 0.01403476138617546
},
"harness|hellaswag|10": {
"acc": 0.44214299940250945,
"acc_stderr": 0.004956262919324398,
"acc_norm": 0.5896235809599681,
"acc_norm_stderr": 0.004908967278222482
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653697,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653697
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2074074074074074,
"acc_stderr": 0.035025531706783165,
"acc_norm": 0.2074074074074074,
"acc_norm_stderr": 0.035025531706783165
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.03317672787533157,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.03317672787533157
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21132075471698114,
"acc_stderr": 0.025125766484827845,
"acc_norm": 0.21132075471698114,
"acc_norm_stderr": 0.025125766484827845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.25,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816508,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816508
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847415,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847415
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.26011560693641617,
"acc_stderr": 0.033450369167889904,
"acc_norm": 0.26011560693641617,
"acc_norm_stderr": 0.033450369167889904
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2936170212765957,
"acc_stderr": 0.029771642712491223,
"acc_norm": 0.2936170212765957,
"acc_norm_stderr": 0.029771642712491223
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2894736842105263,
"acc_stderr": 0.04266339443159394,
"acc_norm": 0.2894736842105263,
"acc_norm_stderr": 0.04266339443159394
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.25517241379310346,
"acc_stderr": 0.03632984052707842,
"acc_norm": 0.25517241379310346,
"acc_norm_stderr": 0.03632984052707842
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.23015873015873015,
"acc_stderr": 0.02167921966369315,
"acc_norm": 0.23015873015873015,
"acc_norm_stderr": 0.02167921966369315
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.0404061017820884,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.0404061017820884
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.18064516129032257,
"acc_stderr": 0.02188617856717255,
"acc_norm": 0.18064516129032257,
"acc_norm_stderr": 0.02188617856717255
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.23645320197044334,
"acc_stderr": 0.02989611429173354,
"acc_norm": 0.23645320197044334,
"acc_norm_stderr": 0.02989611429173354
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.22,
"acc_stderr": 0.0416333199893227,
"acc_norm": 0.22,
"acc_norm_stderr": 0.0416333199893227
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24848484848484848,
"acc_stderr": 0.03374402644139404,
"acc_norm": 0.24848484848484848,
"acc_norm_stderr": 0.03374402644139404
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.18181818181818182,
"acc_stderr": 0.027479603010538797,
"acc_norm": 0.18181818181818182,
"acc_norm_stderr": 0.027479603010538797
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.23316062176165803,
"acc_stderr": 0.030516111371476008,
"acc_norm": 0.23316062176165803,
"acc_norm_stderr": 0.030516111371476008
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.24615384615384617,
"acc_stderr": 0.021840866990423084,
"acc_norm": 0.24615384615384617,
"acc_norm_stderr": 0.021840866990423084
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.02620276653465215,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.02620276653465215
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.02755361446786382,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.02755361446786382
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.0395802723112157,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.0395802723112157
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.20917431192660552,
"acc_stderr": 0.017437937173343226,
"acc_norm": 0.20917431192660552,
"acc_norm_stderr": 0.017437937173343226
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.0316746870682898,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.0316746870682898
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2107843137254902,
"acc_stderr": 0.028626547912437388,
"acc_norm": 0.2107843137254902,
"acc_norm_stderr": 0.028626547912437388
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2911392405063291,
"acc_stderr": 0.02957160106575337,
"acc_norm": 0.2911392405063291,
"acc_norm_stderr": 0.02957160106575337
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.28699551569506726,
"acc_stderr": 0.030360379710291954,
"acc_norm": 0.28699551569506726,
"acc_norm_stderr": 0.030360379710291954
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.037683359597287434,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.037683359597287434
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.256198347107438,
"acc_stderr": 0.03984979653302871,
"acc_norm": 0.256198347107438,
"acc_norm_stderr": 0.03984979653302871
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22085889570552147,
"acc_stderr": 0.032591773927421776,
"acc_norm": 0.22085889570552147,
"acc_norm_stderr": 0.032591773927421776
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.32142857142857145,
"acc_stderr": 0.04432804055291519,
"acc_norm": 0.32142857142857145,
"acc_norm_stderr": 0.04432804055291519
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2948717948717949,
"acc_stderr": 0.029872577708891148,
"acc_norm": 0.2948717948717949,
"acc_norm_stderr": 0.029872577708891148
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23499361430395913,
"acc_stderr": 0.015162024152278434,
"acc_norm": 0.23499361430395913,
"acc_norm_stderr": 0.015162024152278434
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2514450867052023,
"acc_stderr": 0.023357365785874037,
"acc_norm": 0.2514450867052023,
"acc_norm_stderr": 0.023357365785874037
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.27262569832402234,
"acc_stderr": 0.014893391735249588,
"acc_norm": 0.27262569832402234,
"acc_norm_stderr": 0.014893391735249588
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.024288619466046112,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.024288619466046112
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2733118971061093,
"acc_stderr": 0.025311765975426122,
"acc_norm": 0.2733118971061093,
"acc_norm_stderr": 0.025311765975426122
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2345679012345679,
"acc_stderr": 0.023576881744005723,
"acc_norm": 0.2345679012345679,
"acc_norm_stderr": 0.023576881744005723
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2198581560283688,
"acc_stderr": 0.02470614107070547,
"acc_norm": 0.2198581560283688,
"acc_norm_stderr": 0.02470614107070547
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24771838331160365,
"acc_stderr": 0.01102549929144374,
"acc_norm": 0.24771838331160365,
"acc_norm_stderr": 0.01102549929144374
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.29044117647058826,
"acc_stderr": 0.027576468622740512,
"acc_norm": 0.29044117647058826,
"acc_norm_stderr": 0.027576468622740512
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2630718954248366,
"acc_stderr": 0.017812676542320657,
"acc_norm": 0.2630718954248366,
"acc_norm_stderr": 0.017812676542320657
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.04013964554072775,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.04013964554072775
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.20816326530612245,
"acc_stderr": 0.025991117672813292,
"acc_norm": 0.20816326530612245,
"acc_norm_stderr": 0.025991117672813292
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.030360490154014652,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.030360490154014652
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2710843373493976,
"acc_stderr": 0.034605799075530276,
"acc_norm": 0.2710843373493976,
"acc_norm_stderr": 0.034605799075530276
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.30994152046783624,
"acc_stderr": 0.03546976959393163,
"acc_norm": 0.30994152046783624,
"acc_norm_stderr": 0.03546976959393163
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2386780905752754,
"mc1_stderr": 0.014922629695456418,
"mc2": 0.39979776889587376,
"mc2_stderr": 0.014420445519552157
},
"harness|winogrande|5": {
"acc": 0.5840568271507498,
"acc_stderr": 0.013852485356798255
},
"harness|drop|3": {
"em": 0.01583473154362416,
"em_stderr": 0.0012784360866061313,
"f1": 0.07108431208053706,
"f1_stderr": 0.0017891407240589372
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022544996
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B | [
"region:us"
]
| 2023-11-13T15:50:04+00:00 | {"pretty_name": "Evaluation run of 42dot/42dot_LLM-SFT-1.3B", "dataset_summary": "Dataset automatically created during the evaluation run of model [42dot/42dot_LLM-SFT-1.3B](https://huggingface.co/42dot/42dot_LLM-SFT-1.3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T15:47:16.910477](https://huggingface.co/datasets/open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B_public/blob/main/results_2023-11-13T15-47-16.910477.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.26083934068438247,\n \"acc_stderr\": 0.03100224322901986,\n \"acc_norm\": 0.262585495126005,\n \"acc_norm_stderr\": 0.031783041105593664,\n \"mc1\": 0.2386780905752754,\n \"mc1_stderr\": 0.014922629695456418,\n \"mc2\": 0.39979776889587376,\n \"mc2_stderr\": 0.014420445519552157,\n \"em\": 0.01583473154362416,\n \"em_stderr\": 0.0012784360866061313,\n \"f1\": 0.07108431208053706,\n \"f1_stderr\": 0.0017891407240589372\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3361774744027304,\n \"acc_stderr\": 0.013804855026205758,\n \"acc_norm\": 0.3609215017064846,\n \"acc_norm_stderr\": 0.01403476138617546\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.44214299940250945,\n \"acc_stderr\": 0.004956262919324398,\n \"acc_norm\": 0.5896235809599681,\n \"acc_norm_stderr\": 0.004908967278222482\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653697,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653697\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2074074074074074,\n \"acc_stderr\": 0.035025531706783165,\n \"acc_norm\": 0.2074074074074074,\n \"acc_norm_stderr\": 0.035025531706783165\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.03317672787533157,\n \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.03317672787533157\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.21132075471698114,\n \"acc_stderr\": 0.025125766484827845,\n \"acc_norm\": 0.21132075471698114,\n \"acc_norm_stderr\": 0.025125766484827845\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 
0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816508,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816508\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847415,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847415\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.26011560693641617,\n \"acc_stderr\": 0.033450369167889904,\n \"acc_norm\": 0.26011560693641617,\n \"acc_norm_stderr\": 0.033450369167889904\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2936170212765957,\n \"acc_stderr\": 0.029771642712491223,\n \"acc_norm\": 0.2936170212765957,\n \"acc_norm_stderr\": 0.029771642712491223\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2894736842105263,\n \"acc_stderr\": 0.04266339443159394,\n \"acc_norm\": 0.2894736842105263,\n \"acc_norm_stderr\": 0.04266339443159394\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.25517241379310346,\n \"acc_stderr\": 0.03632984052707842,\n \"acc_norm\": 0.25517241379310346,\n \"acc_norm_stderr\": 0.03632984052707842\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.23015873015873015,\n \"acc_stderr\": 0.02167921966369315,\n \"acc_norm\": 0.23015873015873015,\n \"acc_norm_stderr\": 0.02167921966369315\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.0404061017820884,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.0404061017820884\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.18064516129032257,\n \"acc_stderr\": 0.02188617856717255,\n \"acc_norm\": 0.18064516129032257,\n \"acc_norm_stderr\": 0.02188617856717255\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.23645320197044334,\n \"acc_stderr\": 0.02989611429173354,\n \"acc_norm\": 0.23645320197044334,\n \"acc_norm_stderr\": 0.02989611429173354\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.0416333199893227\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.24848484848484848,\n \"acc_stderr\": 0.03374402644139404,\n \"acc_norm\": 0.24848484848484848,\n \"acc_norm_stderr\": 0.03374402644139404\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.18181818181818182,\n \"acc_stderr\": 0.027479603010538797,\n \"acc_norm\": 0.18181818181818182,\n \"acc_norm_stderr\": 0.027479603010538797\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.23316062176165803,\n \"acc_stderr\": 0.030516111371476008,\n \"acc_norm\": 
0.23316062176165803,\n \"acc_norm_stderr\": 0.030516111371476008\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.24615384615384617,\n \"acc_stderr\": 0.021840866990423084,\n \"acc_norm\": 0.24615384615384617,\n \"acc_norm_stderr\": 0.021840866990423084\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24444444444444444,\n \"acc_stderr\": 0.02620276653465215,\n \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.02620276653465215\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.02755361446786382,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.02755361446786382\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.0395802723112157,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.0395802723112157\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.20917431192660552,\n \"acc_stderr\": 0.017437937173343226,\n \"acc_norm\": 0.20917431192660552,\n \"acc_norm_stderr\": 0.017437937173343226\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.0316746870682898,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.0316746870682898\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2107843137254902,\n \"acc_stderr\": 0.028626547912437388,\n \"acc_norm\": 0.2107843137254902,\n \"acc_norm_stderr\": 0.028626547912437388\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2911392405063291,\n \"acc_stderr\": 0.02957160106575337,\n \"acc_norm\": 0.2911392405063291,\n \"acc_norm_stderr\": 0.02957160106575337\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.28699551569506726,\n \"acc_stderr\": 0.030360379710291954,\n \"acc_norm\": 0.28699551569506726,\n \"acc_norm_stderr\": 0.030360379710291954\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.037683359597287434,\n \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.037683359597287434\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.256198347107438,\n \"acc_stderr\": 0.03984979653302871,\n \"acc_norm\": 0.256198347107438,\n \"acc_norm_stderr\": 0.03984979653302871\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.22085889570552147,\n \"acc_stderr\": 0.032591773927421776,\n \"acc_norm\": 0.22085889570552147,\n \"acc_norm_stderr\": 0.032591773927421776\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n \"acc_stderr\": 0.04432804055291519,\n \"acc_norm\": 0.32142857142857145,\n \"acc_norm_stderr\": 0.04432804055291519\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2948717948717949,\n \"acc_stderr\": 0.029872577708891148,\n \"acc_norm\": 0.2948717948717949,\n \"acc_norm_stderr\": 0.029872577708891148\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23499361430395913,\n \"acc_stderr\": 0.015162024152278434,\n \"acc_norm\": 0.23499361430395913,\n \"acc_norm_stderr\": 0.015162024152278434\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.2514450867052023,\n \"acc_stderr\": 0.023357365785874037,\n \"acc_norm\": 0.2514450867052023,\n \"acc_norm_stderr\": 0.023357365785874037\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27262569832402234,\n \"acc_stderr\": 0.014893391735249588,\n \"acc_norm\": 0.27262569832402234,\n \"acc_norm_stderr\": 0.014893391735249588\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.024288619466046112,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.024288619466046112\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2733118971061093,\n \"acc_stderr\": 0.025311765975426122,\n \"acc_norm\": 0.2733118971061093,\n \"acc_norm_stderr\": 0.025311765975426122\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2345679012345679,\n \"acc_stderr\": 0.023576881744005723,\n \"acc_norm\": 0.2345679012345679,\n \"acc_norm_stderr\": 0.023576881744005723\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2198581560283688,\n \"acc_stderr\": 0.02470614107070547,\n \"acc_norm\": 0.2198581560283688,\n \"acc_norm_stderr\": 0.02470614107070547\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24771838331160365,\n \"acc_stderr\": 0.01102549929144374,\n \"acc_norm\": 0.24771838331160365,\n \"acc_norm_stderr\": 0.01102549929144374\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.29044117647058826,\n \"acc_stderr\": 0.027576468622740512,\n \"acc_norm\": 0.29044117647058826,\n \"acc_norm_stderr\": 0.027576468622740512\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2630718954248366,\n \"acc_stderr\": 0.017812676542320657,\n \"acc_norm\": 0.2630718954248366,\n \"acc_norm_stderr\": 0.017812676542320657\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.22727272727272727,\n \"acc_stderr\": 0.04013964554072775,\n \"acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.04013964554072775\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.20816326530612245,\n \"acc_stderr\": 0.025991117672813292,\n \"acc_norm\": 0.20816326530612245,\n \"acc_norm_stderr\": 0.025991117672813292\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n \"acc_stderr\": 0.030360490154014652,\n \"acc_norm\": 0.24378109452736318,\n \"acc_norm_stderr\": 0.030360490154014652\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2710843373493976,\n \"acc_stderr\": 0.034605799075530276,\n \"acc_norm\": 0.2710843373493976,\n \"acc_norm_stderr\": 0.034605799075530276\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.30994152046783624,\n \"acc_stderr\": 0.03546976959393163,\n \"acc_norm\": 0.30994152046783624,\n \"acc_norm_stderr\": 0.03546976959393163\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2386780905752754,\n \"mc1_stderr\": 0.014922629695456418,\n \"mc2\": 0.39979776889587376,\n \"mc2_stderr\": 0.014420445519552157\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5840568271507498,\n \"acc_stderr\": 0.013852485356798255\n },\n 
\"harness|drop|3\": {\n \"em\": 0.01583473154362416,\n \"em_stderr\": 0.0012784360866061313,\n \"f1\": 0.07108431208053706,\n \"f1_stderr\": 0.0017891407240589372\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.0022675371022544996\n }\n}\n```", "repo_url": "https://huggingface.co/42dot/42dot_LLM-SFT-1.3B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|drop|3_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-47-16.910477.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-47-16.910477.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-47-16.910477.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-47-16.910477.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-47-16.910477.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["**/details_harness|winogrande|5_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T15-47-16.910477.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_47_16.910477", "path": ["results_2023-11-13T15-47-16.910477.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T15-47-16.910477.parquet"]}]}]} | 2023-11-13T15:50:51+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of 42dot/42dot_LLM-SFT-1.3B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model 42dot/42dot_LLM-SFT-1.3B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
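As a minimal sketch (the repository id below is an assumption, derived from the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>_public` naming convention for this model; the configuration and split names come from this card's metadata):

```python
from datasets import load_dataset

# Assumed repo id, following the leaderboard's details-dataset naming convention.
data = load_dataset(
    "open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B_public",
    "harness_winogrande_5",
    split="train",
)

# The aggregated metrics described above live in the "results" configuration;
# the "latest" split always points to the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_42dot__42dot_LLM-SFT-1.3B_public",
    "results",
    split="latest",
)
```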
## Latest results
These are the latest results from run 2023-11-13T15:47:16.910477 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of 42dot/42dot_LLM-SFT-1.3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 42dot/42dot_LLM-SFT-1.3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:47:16.910477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of 42dot/42dot_LLM-SFT-1.3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model 42dot/42dot_LLM-SFT-1.3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:47:16.910477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
23,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of 42dot/42dot_LLM-SFT-1.3B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model 42dot/42dot_LLM-SFT-1.3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T15:47:16.910477(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
47fa51f5897e6a1bf288c6625e65ffbd9bd9be84 | # Dataset Card for "Extract-QA-question-answer-with-context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nlplabtdtu/Extract-QA-question-answer-with-context | [
"region:us"
]
| 2023-11-13T15:52:04+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "struct": [{"name": "response", "dtype": "string"}]}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "instruction", "dtype": "string"}, {"name": "prompt_name", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "max_ratio", "dtype": "float64"}, {"name": "paragraph_similar", "dtype": "string"}, {"name": "start_index", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 21511788, "num_examples": 7597}], "download_size": 8245485, "dataset_size": 21511788}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-14T14:58:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Extract-QA-question-answer-with-context"
More Information needed | [
"# Dataset Card for \"Extract-QA-question-answer-with-context\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Extract-QA-question-answer-with-context\"\n\nMore Information needed"
]
| [
6,
26
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Extract-QA-question-answer-with-context\"\n\nMore Information needed"
]
|
78ff34b3b0855ad700b5cbe3b7c01b61fb8b8040 |
# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-Mk2-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PocketDoc/Dans-AdventurousWinds-Mk2-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [PocketDoc/Dans-AdventurousWinds-Mk2-7b](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-Mk2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b_public",
"harness_winogrande_5",
split="train")
```
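
To pull the aggregated scores directly, here is a small usage sketch of the "results" configuration mentioned above (the "latest" split points to the most recent run):

```python
from datasets import load_dataset

# Load the aggregated metrics for this model from the "results" configuration.
results = load_dataset(
    "open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b_public",
    "results",
    split="latest",
)

# Inspect the first (and typically only) row of aggregated scores for the run.
print(results[0])
```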
## Latest results
These are the [latest results from run 2023-11-13T15:52:43.892204](https://huggingface.co/datasets/open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b_public/blob/main/results_2023-11-13T15-52-43.892204.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6117457883588181,
"acc_stderr": 0.03285127869008788,
"acc_norm": 0.621056172344861,
"acc_norm_stderr": 0.033574977794886766,
"mc1": 0.28886168910648713,
"mc1_stderr": 0.01586634640138431,
"mc2": 0.43563008850906,
"mc2_stderr": 0.014459760341061523,
"em": 0.0018875838926174498,
"em_stderr": 0.00044451099905589315,
"f1": 0.06191904362416096,
"f1_stderr": 0.0014055022875998687
},
"harness|arc:challenge|25": {
"acc": 0.53839590443686,
"acc_stderr": 0.014568245550296354,
"acc_norm": 0.5819112627986348,
"acc_norm_stderr": 0.014413988396996077
},
"harness|hellaswag|10": {
"acc": 0.6399123680541725,
"acc_stderr": 0.004790445139186366,
"acc_norm": 0.8347938657637921,
"acc_norm_stderr": 0.003706075184380282
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353227,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353227
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.625,
"acc_stderr": 0.039397364351956274,
"acc_norm": 0.625,
"acc_norm_stderr": 0.039397364351956274
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.02890159361241178,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.02890159361241178
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.047240073523838876,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.047240073523838876
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.0470070803355104,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.0470070803355104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4896551724137931,
"acc_stderr": 0.041657747757287644,
"acc_norm": 0.4896551724137931,
"acc_norm_stderr": 0.041657747757287644
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.025279850397404904,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.025279850397404904
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7548387096774194,
"acc_stderr": 0.024472243840895525,
"acc_norm": 0.7548387096774194,
"acc_norm_stderr": 0.024472243840895525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.032876667586034906,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.032876667586034906
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.02886977846026705,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.02886977846026705
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709443,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.658974358974359,
"acc_stderr": 0.02403548967633507,
"acc_norm": 0.658974358974359,
"acc_norm_stderr": 0.02403548967633507
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.028226446749683515,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.028226446749683515
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7981651376146789,
"acc_stderr": 0.017208579357787575,
"acc_norm": 0.7981651376146789,
"acc_norm_stderr": 0.017208579357787575
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5787037037037037,
"acc_stderr": 0.03367462138896078,
"acc_norm": 0.5787037037037037,
"acc_norm_stderr": 0.03367462138896078
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7794117647058824,
"acc_stderr": 0.02910225438967407,
"acc_norm": 0.7794117647058824,
"acc_norm_stderr": 0.02910225438967407
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7468354430379747,
"acc_stderr": 0.0283046579430353,
"acc_norm": 0.7468354430379747,
"acc_norm_stderr": 0.0283046579430353
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6636771300448431,
"acc_stderr": 0.031708824268455005,
"acc_norm": 0.6636771300448431,
"acc_norm_stderr": 0.031708824268455005
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7557251908396947,
"acc_stderr": 0.03768335959728742,
"acc_norm": 0.7557251908396947,
"acc_norm_stderr": 0.03768335959728742
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302871,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302871
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.04330043749650742,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.04330043749650742
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8504273504273504,
"acc_stderr": 0.02336505149175372,
"acc_norm": 0.8504273504273504,
"acc_norm_stderr": 0.02336505149175372
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.789272030651341,
"acc_stderr": 0.014583812465862541,
"acc_norm": 0.789272030651341,
"acc_norm_stderr": 0.014583812465862541
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6445086705202312,
"acc_stderr": 0.025770292082977254,
"acc_norm": 0.6445086705202312,
"acc_norm_stderr": 0.025770292082977254
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3240223463687151,
"acc_stderr": 0.015652542496421118,
"acc_norm": 0.3240223463687151,
"acc_norm_stderr": 0.015652542496421118
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.02582916327275748,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.02582916327275748
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6720257234726688,
"acc_stderr": 0.026664410886937613,
"acc_norm": 0.6720257234726688,
"acc_norm_stderr": 0.026664410886937613
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6851851851851852,
"acc_stderr": 0.025842248700902168,
"acc_norm": 0.6851851851851852,
"acc_norm_stderr": 0.025842248700902168
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.44680851063829785,
"acc_stderr": 0.029658235097666907,
"acc_norm": 0.44680851063829785,
"acc_norm_stderr": 0.029658235097666907
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4028683181225554,
"acc_stderr": 0.012526955577118016,
"acc_norm": 0.4028683181225554,
"acc_norm_stderr": 0.012526955577118016
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6617647058823529,
"acc_stderr": 0.028739328513983572,
"acc_norm": 0.6617647058823529,
"acc_norm_stderr": 0.028739328513983572
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6323529411764706,
"acc_stderr": 0.019506291693954854,
"acc_norm": 0.6323529411764706,
"acc_norm_stderr": 0.019506291693954854
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6489795918367347,
"acc_stderr": 0.03055531675557364,
"acc_norm": 0.6489795918367347,
"acc_norm_stderr": 0.03055531675557364
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7910447761194029,
"acc_stderr": 0.028748298931728655,
"acc_norm": 0.7910447761194029,
"acc_norm_stderr": 0.028748298931728655
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.28886168910648713,
"mc1_stderr": 0.01586634640138431,
"mc2": 0.43563008850906,
"mc2_stderr": 0.014459760341061523
},
"harness|winogrande|5": {
"acc": 0.7632202052091555,
"acc_stderr": 0.011947592365207394
},
"harness|drop|3": {
"em": 0.0018875838926174498,
"em_stderr": 0.00044451099905589315,
"f1": 0.06191904362416096,
"f1_stderr": 0.0014055022875998687
},
"harness|gsm8k|5": {
"acc": 0.14935557240333586,
"acc_stderr": 0.009818090723727293
}
}
```
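The aggregated figures above are also stored in the `results` configuration of this dataset. A minimal sketch for reloading them (repository and configuration names follow this card's metadata; the `latest` split always points to the most recent run):

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics shown above.
# The "latest" split is an alias for the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b_public",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated scores for this run
```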
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b | [
"region:us"
]
| 2023-11-13T15:55:45+00:00 | {"pretty_name": "Evaluation run of PocketDoc/Dans-AdventurousWinds-Mk2-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [PocketDoc/Dans-AdventurousWinds-Mk2-7b](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-Mk2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T15:52:43.892204](https://huggingface.co/datasets/open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b_public/blob/main/results_2023-11-13T15-52-43.892204.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6117457883588181,\n \"acc_stderr\": 0.03285127869008788,\n \"acc_norm\": 0.621056172344861,\n \"acc_norm_stderr\": 0.033574977794886766,\n \"mc1\": 0.28886168910648713,\n \"mc1_stderr\": 0.01586634640138431,\n \"mc2\": 0.43563008850906,\n \"mc2_stderr\": 0.014459760341061523,\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.00044451099905589315,\n \"f1\": 0.06191904362416096,\n \"f1_stderr\": 0.0014055022875998687\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.53839590443686,\n \"acc_stderr\": 0.014568245550296354,\n \"acc_norm\": 0.5819112627986348,\n \"acc_norm_stderr\": 0.014413988396996077\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6399123680541725,\n \"acc_stderr\": 0.004790445139186366,\n \"acc_norm\": 0.8347938657637921,\n \"acc_norm_stderr\": 0.003706075184380282\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.04244633238353227,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.04244633238353227\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n \"acc_stderr\": 
0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.047240073523838876,\n \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.047240073523838876\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.0470070803355104,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.0470070803355104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4896551724137931,\n \"acc_stderr\": 0.041657747757287644,\n \"acc_norm\": 0.4896551724137931,\n \"acc_norm_stderr\": 0.041657747757287644\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404904,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404904\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7548387096774194,\n \"acc_stderr\": 0.024472243840895525,\n \"acc_norm\": 0.7548387096774194,\n \"acc_norm_stderr\": 0.024472243840895525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.032876667586034906,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.032876667586034906\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026705,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026705\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 
0.8652849740932642,\n \"acc_stderr\": 0.024639789097709443,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.658974358974359,\n \"acc_stderr\": 0.02403548967633507,\n \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.02403548967633507\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3111111111111111,\n \"acc_stderr\": 0.028226446749683515,\n \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.028226446749683515\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7981651376146789,\n \"acc_stderr\": 0.017208579357787575,\n \"acc_norm\": 0.7981651376146789,\n \"acc_norm_stderr\": 0.017208579357787575\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.03367462138896078,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.03367462138896078\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.02910225438967407,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.02910225438967407\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7468354430379747,\n \"acc_stderr\": 0.0283046579430353,\n \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.0283046579430353\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6636771300448431,\n \"acc_stderr\": 0.031708824268455005,\n \"acc_norm\": 0.6636771300448431,\n \"acc_norm_stderr\": 0.031708824268455005\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.03768335959728742,\n \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.03768335959728742\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.743801652892562,\n \"acc_stderr\": 0.03984979653302871,\n \"acc_norm\": 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302871\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.04330043749650742,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.04330043749650742\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n \"acc_stderr\": 0.02336505149175372,\n \"acc_norm\": 0.8504273504273504,\n \"acc_norm_stderr\": 0.02336505149175372\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 
0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.789272030651341,\n \"acc_stderr\": 0.014583812465862541,\n \"acc_norm\": 0.789272030651341,\n \"acc_norm_stderr\": 0.014583812465862541\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6445086705202312,\n \"acc_stderr\": 0.025770292082977254,\n \"acc_norm\": 0.6445086705202312,\n \"acc_norm_stderr\": 0.025770292082977254\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3240223463687151,\n \"acc_stderr\": 0.015652542496421118,\n \"acc_norm\": 0.3240223463687151,\n \"acc_norm_stderr\": 0.015652542496421118\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.02582916327275748,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.02582916327275748\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6720257234726688,\n \"acc_stderr\": 0.026664410886937613,\n \"acc_norm\": 0.6720257234726688,\n \"acc_norm_stderr\": 0.026664410886937613\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6851851851851852,\n \"acc_stderr\": 0.025842248700902168,\n \"acc_norm\": 0.6851851851851852,\n \"acc_norm_stderr\": 0.025842248700902168\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.44680851063829785,\n \"acc_stderr\": 0.029658235097666907,\n \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.029658235097666907\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4028683181225554,\n \"acc_stderr\": 0.012526955577118016,\n \"acc_norm\": 0.4028683181225554,\n \"acc_norm_stderr\": 0.012526955577118016\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6617647058823529,\n \"acc_stderr\": 0.028739328513983572,\n \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.028739328513983572\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.019506291693954854,\n \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.019506291693954854\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6489795918367347,\n \"acc_stderr\": 0.03055531675557364,\n \"acc_norm\": 0.6489795918367347,\n \"acc_norm_stderr\": 0.03055531675557364\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7910447761194029,\n \"acc_stderr\": 0.028748298931728655,\n \"acc_norm\": 0.7910447761194029,\n \"acc_norm_stderr\": 0.028748298931728655\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28886168910648713,\n \"mc1_stderr\": 0.01586634640138431,\n \"mc2\": 0.43563008850906,\n \"mc2_stderr\": 0.014459760341061523\n },\n \"harness|winogrande|5\": {\n \"acc\": 
0.7632202052091555,\n \"acc_stderr\": 0.011947592365207394\n },\n \"harness|drop|3\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.00044451099905589315,\n \"f1\": 0.06191904362416096,\n \"f1_stderr\": 0.0014055022875998687\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14935557240333586,\n \"acc_stderr\": 0.009818090723727293\n }\n}\n```", "repo_url": "https://huggingface.co/PocketDoc/Dans-AdventurousWinds-Mk2-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|drop|3_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-52-43.892204.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-52-43.892204.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-52-43.892204.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["**/details_harness|winogrande|5_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T15-52-43.892204.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_52_43.892204", "path": ["results_2023-11-13T15-52-43.892204.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T15-52-43.892204.parquet"]}]}]} | 2023-11-13T15:56:31+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-Mk2-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-Mk2-7b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
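A minimal sketch (mirroring the loader snippet recorded in this card's metadata; `harness_winogrande_5` is one of the 64 available configurations):

```python
from datasets import load_dataset

# Load the per-sample details for a single evaluation task; any of the
# "harness_*" configuration names listed in the metadata can be substituted.
data = load_dataset(
    "open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-Mk2-7b_public",
    "harness_winogrande_5",
    split="train",
)
```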
## Latest results
These are the latest results from run 2023-11-13T15:52:43.892204 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks). You can find each in the results and the "latest" split for each eval:
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-Mk2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-Mk2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:52:43.892204(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-Mk2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-Mk2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:52:43.892204(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-Mk2-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-Mk2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T15:52:43.892204(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
949b882d218af9b7dc1f621d4a4cde19be458e92 | # Dataset Card for "GPTextSum2_data-cstnews_results"
rouge= {'rouge1': 0.4955449044644583, 'rouge2': 0.21363254435405743, 'rougeL': 0.291321677352629, 'rougeLsum': 0.291321677352629}
bert= {'precision': 0.7323895692825317, 'recall': 0.7477052390575409, 'f1': 0.739660456776619}
mover = 0.6241053412803379 | arthurmluz/GPTextSum2_data-cstnews_results | [
"region:us"
]
| 2023-11-13T15:57:30+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 104575, "num_examples": 20}], "download_size": 98942, "dataset_size": 104575}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:23:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-cstnews_results"
rouge= {'rouge1': 0.4955449044644583, 'rouge2': 0.21363254435405743, 'rougeL': 0.291321677352629, 'rougeLsum': 0.291321677352629}
bert= {'precision': 0.7323895692825317, 'recall': 0.7477052390575409, 'f1': 0.739660456776619}
mover = 0.6241053412803379 | [
"# Dataset Card for \"GPTextSum2_data-cstnews_results\"\n\nrouge= {'rouge1': 0.4955449044644583, 'rouge2': 0.21363254435405743, 'rougeL': 0.291321677352629, 'rougeLsum': 0.291321677352629}\n\nbert= {'precision': 0.7323895692825317, 'recall': 0.7477052390575409, 'f1': 0.739660456776619}\n\nmover = 0.6241053412803379"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-cstnews_results\"\n\nrouge= {'rouge1': 0.4955449044644583, 'rouge2': 0.21363254435405743, 'rougeL': 0.291321677352629, 'rougeLsum': 0.291321677352629}\n\nbert= {'precision': 0.7323895692825317, 'recall': 0.7477052390575409, 'f1': 0.739660456776619}\n\nmover = 0.6241053412803379"
]
| [
6,
142
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-cstnews_results\"\n\nrouge= {'rouge1': 0.4955449044644583, 'rouge2': 0.21363254435405743, 'rougeL': 0.291321677352629, 'rougeLsum': 0.291321677352629}\n\nbert= {'precision': 0.7323895692825317, 'recall': 0.7477052390575409, 'f1': 0.739660456776619}\n\nmover = 0.6241053412803379"
]
|
6cd1213deae78c4791fe7701dd19fd03447e1a47 | # Dataset Card for "GPTextSum2_data-temario_results"
rouge= {'rouge1': 0.49515643603635084, 'rouge2': 0.21756085354083887, 'rougeL': 0.3034293883115211, 'rougeLsum': 0.3034293883115211}
bert= {'precision': 0.7198777735233307, 'recall': 0.7549779504537583, 'f1': 0.7367873221635819}
mover = 0.6264011166318602 | arthurmluz/GPTextSum2_data-temario_results | [
"region:us"
]
| 2023-11-13T16:01:23+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 114709, "num_examples": 20}], "download_size": 108791, "dataset_size": 114709}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:23:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-temario_results"
rouge= {'rouge1': 0.49515643603635084, 'rouge2': 0.21756085354083887, 'rougeL': 0.3034293883115211, 'rougeLsum': 0.3034293883115211}
bert= {'precision': 0.7198777735233307, 'recall': 0.7549779504537583, 'f1': 0.7367873221635819}
mover = 0.6264011166318602 | [
"# Dataset Card for \"GPTextSum2_data-temario_results\"\n\nrouge= {'rouge1': 0.49515643603635084, 'rouge2': 0.21756085354083887, 'rougeL': 0.3034293883115211, 'rougeLsum': 0.3034293883115211}\n\nbert= {'precision': 0.7198777735233307, 'recall': 0.7549779504537583, 'f1': 0.7367873221635819}\n\nmover = 0.6264011166318602"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-temario_results\"\n\nrouge= {'rouge1': 0.49515643603635084, 'rouge2': 0.21756085354083887, 'rougeL': 0.3034293883115211, 'rougeLsum': 0.3034293883115211}\n\nbert= {'precision': 0.7198777735233307, 'recall': 0.7549779504537583, 'f1': 0.7367873221635819}\n\nmover = 0.6264011166318602"
]
| [
6,
145
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-temario_results\"\n\nrouge= {'rouge1': 0.49515643603635084, 'rouge2': 0.21756085354083887, 'rougeL': 0.3034293883115211, 'rougeLsum': 0.3034293883115211}\n\nbert= {'precision': 0.7198777735233307, 'recall': 0.7549779504537583, 'f1': 0.7367873221635819}\n\nmover = 0.6264011166318602"
]
|
2902292eb76bc580ccf8fc1f45c613b5b0dc48fc |
# Dataset Card for Evaluation run of w95/megachat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/w95/megachat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [w95/megachat](https://huggingface.co/w95/megachat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_w95__megachat_public",
"harness_winogrande_5",
split="train")
```
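
Once loaded, you can inspect the split directly; the sketch below only assumes the standard `datasets` API, and the exact column names may vary between harness versions, so treat `data.column_names` as the source of truth rather than this example:

```python
from datasets import load_dataset

# Load the latest Winogrande details for this run (same call as above).
data = load_dataset("open-llm-leaderboard/details_w95__megachat_public",
                    "harness_winogrande_5",
                    split="train")

# Number of evaluated examples and the columns available for each of them.
print(len(data))
print(data.column_names)

# Peek at the first per-example record as a plain dict.
print(data[0])
```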
## Latest results
These are the [latest results from run 2023-11-13T15:59:20.049368](https://huggingface.co/datasets/open-llm-leaderboard/details_w95__megachat_public/blob/main/results_2023-11-13T15-59-20.049368.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.25936163462487405,
"acc_stderr": 0.03091692313677521,
"acc_norm": 0.26122603283331186,
"acc_norm_stderr": 0.031692702511721224,
"mc1": 0.24479804161566707,
"mc1_stderr": 0.015051869486715014,
"mc2": 0.39854544628414945,
"mc2_stderr": 0.014106781910887378,
"em": 0.0006291946308724832,
"em_stderr": 0.00025680027497239604,
"f1": 0.041603397651006783,
"f1_stderr": 0.0011146754682383132
},
"harness|arc:challenge|25": {
"acc": 0.27047781569965873,
"acc_stderr": 0.012980954547659554,
"acc_norm": 0.30802047781569963,
"acc_norm_stderr": 0.01349142951729204
},
"harness|hellaswag|10": {
"acc": 0.4100776737701653,
"acc_stderr": 0.004908423147162023,
"acc_norm": 0.5435172276438957,
"acc_norm_stderr": 0.004970846697552307
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3092105263157895,
"acc_stderr": 0.03761070869867479,
"acc_norm": 0.3092105263157895,
"acc_norm_stderr": 0.03761070869867479
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21132075471698114,
"acc_stderr": 0.02512576648482784,
"acc_norm": 0.21132075471698114,
"acc_norm_stderr": 0.02512576648482784
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.17,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.17,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.03295304696818318,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.03295304696818318
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.041583075330832865,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.041583075330832865
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.20425531914893616,
"acc_stderr": 0.026355158413349424,
"acc_norm": 0.20425531914893616,
"acc_norm_stderr": 0.026355158413349424
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.04142439719489362,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.04142439719489362
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.296551724137931,
"acc_stderr": 0.03806142687309993,
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309993
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2698412698412698,
"acc_stderr": 0.022860838309232072,
"acc_norm": 0.2698412698412698,
"acc_norm_stderr": 0.022860838309232072
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15079365079365079,
"acc_stderr": 0.03200686497287392,
"acc_norm": 0.15079365079365079,
"acc_norm_stderr": 0.03200686497287392
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.25161290322580643,
"acc_stderr": 0.024685979286239956,
"acc_norm": 0.25161290322580643,
"acc_norm_stderr": 0.024685979286239956
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2660098522167488,
"acc_stderr": 0.031089826002937523,
"acc_norm": 0.2660098522167488,
"acc_norm_stderr": 0.031089826002937523
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.26666666666666666,
"acc_stderr": 0.03453131801885415,
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.03453131801885415
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.20202020202020202,
"acc_stderr": 0.02860620428922988,
"acc_norm": 0.20202020202020202,
"acc_norm_stderr": 0.02860620428922988
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.2538860103626943,
"acc_stderr": 0.03141024780565319,
"acc_norm": 0.2538860103626943,
"acc_norm_stderr": 0.03141024780565319
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2230769230769231,
"acc_stderr": 0.021107730127243995,
"acc_norm": 0.2230769230769231,
"acc_norm_stderr": 0.021107730127243995
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26666666666666666,
"acc_stderr": 0.02696242432507383,
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.02696242432507383
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.271523178807947,
"acc_stderr": 0.03631329803969653,
"acc_norm": 0.271523178807947,
"acc_norm_stderr": 0.03631329803969653
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.181651376146789,
"acc_stderr": 0.01653061740926686,
"acc_norm": 0.181651376146789,
"acc_norm_stderr": 0.01653061740926686
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.02792096314799366,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.02792096314799366
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.030587591351604243,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.030587591351604243
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.25738396624472576,
"acc_stderr": 0.028458820991460302,
"acc_norm": 0.25738396624472576,
"acc_norm_stderr": 0.028458820991460302
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.20179372197309417,
"acc_stderr": 0.026936111912802273,
"acc_norm": 0.20179372197309417,
"acc_norm_stderr": 0.026936111912802273
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.03768335959728745,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.03768335959728745
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.36363636363636365,
"acc_stderr": 0.04391326286724071,
"acc_norm": 0.36363636363636365,
"acc_norm_stderr": 0.04391326286724071
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.041331194402438376,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.041331194402438376
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3006134969325153,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.3006134969325153,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.22321428571428573,
"acc_stderr": 0.039523019677025116,
"acc_norm": 0.22321428571428573,
"acc_norm_stderr": 0.039523019677025116
},
"harness|hendrycksTest-management|5": {
"acc": 0.18446601941747573,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.18446601941747573,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2606837606837607,
"acc_stderr": 0.028760348956523414,
"acc_norm": 0.2606837606837607,
"acc_norm_stderr": 0.028760348956523414
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.25287356321839083,
"acc_stderr": 0.015543377313719681,
"acc_norm": 0.25287356321839083,
"acc_norm_stderr": 0.015543377313719681
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.29190751445086704,
"acc_stderr": 0.02447699407624734,
"acc_norm": 0.29190751445086704,
"acc_norm_stderr": 0.02447699407624734
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.24836601307189543,
"acc_stderr": 0.02473998135511359,
"acc_norm": 0.24836601307189543,
"acc_norm_stderr": 0.02473998135511359
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2990353697749196,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.2990353697749196,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.31790123456790126,
"acc_stderr": 0.025910063528240875,
"acc_norm": 0.31790123456790126,
"acc_norm_stderr": 0.025910063528240875
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.26595744680851063,
"acc_stderr": 0.02635806569888059,
"acc_norm": 0.26595744680851063,
"acc_norm_stderr": 0.02635806569888059
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.27183833116036504,
"acc_stderr": 0.011363135278651411,
"acc_norm": 0.27183833116036504,
"acc_norm_stderr": 0.011363135278651411
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.17279411764705882,
"acc_stderr": 0.022966067585581756,
"acc_norm": 0.17279411764705882,
"acc_norm_stderr": 0.022966067585581756
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2696078431372549,
"acc_stderr": 0.017952449196987866,
"acc_norm": 0.2696078431372549,
"acc_norm_stderr": 0.017952449196987866
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.040139645540727735,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.040139645540727735
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.24897959183673468,
"acc_stderr": 0.027682979522960227,
"acc_norm": 0.24897959183673468,
"acc_norm_stderr": 0.027682979522960227
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24875621890547264,
"acc_stderr": 0.030567675938916707,
"acc_norm": 0.24875621890547264,
"acc_norm_stderr": 0.030567675938916707
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2469879518072289,
"acc_stderr": 0.03357351982064536,
"acc_norm": 0.2469879518072289,
"acc_norm_stderr": 0.03357351982064536
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3742690058479532,
"acc_stderr": 0.037116011853894806,
"acc_norm": 0.3742690058479532,
"acc_norm_stderr": 0.037116011853894806
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24479804161566707,
"mc1_stderr": 0.015051869486715014,
"mc2": 0.39854544628414945,
"mc2_stderr": 0.014106781910887378
},
"harness|winogrande|5": {
"acc": 0.5698500394632992,
"acc_stderr": 0.013914685094716698
},
"harness|drop|3": {
"em": 0.0006291946308724832,
"em_stderr": 0.00025680027497239604,
"f1": 0.041603397651006783,
"f1_stderr": 0.0011146754682383132
},
"harness|gsm8k|5": {
"acc": 0.009855951478392721,
"acc_stderr": 0.0027210765770416586
}
}
```
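
If you only need the aggregated numbers above rather than the per-example details, the "results" configuration mentioned earlier can be loaded the same way. This is a minimal sketch; the split naming is an assumption (runs are keyed by timestamp plus a "latest" alias), so the code lists the splits instead of hard-coding one:

```python
from datasets import load_dataset

# Load the aggregated "results" configuration for this run.
results = load_dataset("open-llm-leaderboard/details_w95__megachat_public", "results")

# Splits are named after run timestamps (plus a "latest" alias), so list them
# first rather than assuming a particular name.
print(list(results.keys()))

# Print one aggregated results record from the first available split.
first_split = list(results.keys())[0]
print(results[first_split][0])
```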
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_w95__megachat | [
"region:us"
]
| 2023-11-13T16:01:50+00:00 | {"pretty_name": "Evaluation run of w95/megachat", "dataset_summary": "Dataset automatically created during the evaluation run of model [w95/megachat](https://huggingface.co/w95/megachat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_w95__megachat_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T15:59:20.049368](https://huggingface.co/datasets/open-llm-leaderboard/details_w95__megachat_public/blob/main/results_2023-11-13T15-59-20.049368.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.25936163462487405,\n \"acc_stderr\": 0.03091692313677521,\n \"acc_norm\": 0.26122603283331186,\n \"acc_norm_stderr\": 0.031692702511721224,\n \"mc1\": 0.24479804161566707,\n \"mc1_stderr\": 0.015051869486715014,\n \"mc2\": 0.39854544628414945,\n \"mc2_stderr\": 0.014106781910887378,\n \"em\": 0.0006291946308724832,\n \"em_stderr\": 0.00025680027497239604,\n \"f1\": 0.041603397651006783,\n \"f1_stderr\": 0.0011146754682383132\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.27047781569965873,\n \"acc_stderr\": 0.012980954547659554,\n \"acc_norm\": 0.30802047781569963,\n \"acc_norm_stderr\": 0.01349142951729204\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4100776737701653,\n \"acc_stderr\": 0.004908423147162023,\n \"acc_norm\": 0.5435172276438957,\n \"acc_norm_stderr\": 0.004970846697552307\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.3092105263157895,\n \"acc_stderr\": 0.03761070869867479,\n \"acc_norm\": 0.3092105263157895,\n \"acc_norm_stderr\": 0.03761070869867479\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.21132075471698114,\n \"acc_stderr\": 0.02512576648482784,\n \"acc_norm\": 0.21132075471698114,\n \"acc_norm_stderr\": 0.02512576648482784\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.17,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909284\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.03295304696818318,\n \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.03295304696818318\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.041583075330832865,\n \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.041583075330832865\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.20425531914893616,\n \"acc_stderr\": 0.026355158413349424,\n \"acc_norm\": 0.20425531914893616,\n \"acc_norm_stderr\": 0.026355158413349424\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.04142439719489362,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.04142439719489362\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.03806142687309993,\n \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.03806142687309993\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2698412698412698,\n \"acc_stderr\": 0.022860838309232072,\n \"acc_norm\": 0.2698412698412698,\n \"acc_norm_stderr\": 0.022860838309232072\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n \"acc_stderr\": 0.03200686497287392,\n \"acc_norm\": 0.15079365079365079,\n \"acc_norm_stderr\": 0.03200686497287392\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.25161290322580643,\n \"acc_stderr\": 0.024685979286239956,\n \"acc_norm\": 0.25161290322580643,\n \"acc_norm_stderr\": 0.024685979286239956\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2660098522167488,\n \"acc_stderr\": 0.031089826002937523,\n \"acc_norm\": 0.2660098522167488,\n \"acc_norm_stderr\": 0.031089826002937523\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.03453131801885415,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.03453131801885415\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.20202020202020202,\n \"acc_stderr\": 0.02860620428922988,\n \"acc_norm\": 0.20202020202020202,\n \"acc_norm_stderr\": 0.02860620428922988\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.2538860103626943,\n \"acc_stderr\": 0.03141024780565319,\n \"acc_norm\": 0.2538860103626943,\n 
\"acc_norm_stderr\": 0.03141024780565319\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2230769230769231,\n \"acc_stderr\": 0.021107730127243995,\n \"acc_norm\": 0.2230769230769231,\n \"acc_norm_stderr\": 0.021107730127243995\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.02696242432507383,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.02696242432507383\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.271523178807947,\n \"acc_stderr\": 0.03631329803969653,\n \"acc_norm\": 0.271523178807947,\n \"acc_norm_stderr\": 0.03631329803969653\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.181651376146789,\n \"acc_stderr\": 0.01653061740926686,\n \"acc_norm\": 0.181651376146789,\n \"acc_norm_stderr\": 0.01653061740926686\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.21296296296296297,\n \"acc_stderr\": 0.02792096314799366,\n \"acc_norm\": 0.21296296296296297,\n \"acc_norm_stderr\": 0.02792096314799366\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.030587591351604243,\n \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.030587591351604243\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.25738396624472576,\n \"acc_stderr\": 0.028458820991460302,\n \"acc_norm\": 0.25738396624472576,\n \"acc_norm_stderr\": 0.028458820991460302\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.20179372197309417,\n \"acc_stderr\": 0.026936111912802273,\n \"acc_norm\": 0.20179372197309417,\n \"acc_norm_stderr\": 0.026936111912802273\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.03768335959728745,\n \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.03768335959728745\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.36363636363636365,\n \"acc_stderr\": 0.04391326286724071,\n \"acc_norm\": 0.36363636363636365,\n \"acc_norm_stderr\": 0.04391326286724071\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.041331194402438376,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.041331194402438376\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3006134969325153,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.3006134969325153,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.22321428571428573,\n \"acc_stderr\": 0.039523019677025116,\n \"acc_norm\": 0.22321428571428573,\n \"acc_norm_stderr\": 0.039523019677025116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.18446601941747573,\n \"acc_stderr\": 0.03840423627288276,\n \"acc_norm\": 0.18446601941747573,\n \"acc_norm_stderr\": 0.03840423627288276\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2606837606837607,\n \"acc_stderr\": 0.028760348956523414,\n \"acc_norm\": 0.2606837606837607,\n \"acc_norm_stderr\": 0.028760348956523414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n 
},\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.25287356321839083,\n \"acc_stderr\": 0.015543377313719681,\n \"acc_norm\": 0.25287356321839083,\n \"acc_norm_stderr\": 0.015543377313719681\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.29190751445086704,\n \"acc_stderr\": 0.02447699407624734,\n \"acc_norm\": 0.29190751445086704,\n \"acc_norm_stderr\": 0.02447699407624734\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.24836601307189543,\n \"acc_stderr\": 0.02473998135511359,\n \"acc_norm\": 0.24836601307189543,\n \"acc_norm_stderr\": 0.02473998135511359\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2990353697749196,\n \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.2990353697749196,\n \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.31790123456790126,\n \"acc_stderr\": 0.025910063528240875,\n \"acc_norm\": 0.31790123456790126,\n \"acc_norm_stderr\": 0.025910063528240875\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.26595744680851063,\n \"acc_stderr\": 0.02635806569888059,\n \"acc_norm\": 0.26595744680851063,\n \"acc_norm_stderr\": 0.02635806569888059\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.27183833116036504,\n \"acc_stderr\": 0.011363135278651411,\n \"acc_norm\": 0.27183833116036504,\n \"acc_norm_stderr\": 0.011363135278651411\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.17279411764705882,\n \"acc_stderr\": 0.022966067585581756,\n \"acc_norm\": 0.17279411764705882,\n \"acc_norm_stderr\": 0.022966067585581756\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2696078431372549,\n \"acc_stderr\": 0.017952449196987866,\n \"acc_norm\": 0.2696078431372549,\n \"acc_norm_stderr\": 0.017952449196987866\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.22727272727272727,\n \"acc_stderr\": 0.040139645540727735,\n \"acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.040139645540727735\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.24897959183673468,\n \"acc_stderr\": 0.027682979522960227,\n \"acc_norm\": 0.24897959183673468,\n \"acc_norm_stderr\": 0.027682979522960227\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24875621890547264,\n \"acc_stderr\": 0.030567675938916707,\n \"acc_norm\": 0.24875621890547264,\n \"acc_norm_stderr\": 0.030567675938916707\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2469879518072289,\n \"acc_stderr\": 0.03357351982064536,\n \"acc_norm\": 0.2469879518072289,\n \"acc_norm_stderr\": 0.03357351982064536\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3742690058479532,\n \"acc_stderr\": 0.037116011853894806,\n \"acc_norm\": 0.3742690058479532,\n \"acc_norm_stderr\": 0.037116011853894806\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24479804161566707,\n \"mc1_stderr\": 0.015051869486715014,\n \"mc2\": 0.39854544628414945,\n \"mc2_stderr\": 0.014106781910887378\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5698500394632992,\n \"acc_stderr\": 0.013914685094716698\n },\n 
\"harness|drop|3\": {\n \"em\": 0.0006291946308724832,\n \"em_stderr\": 0.00025680027497239604,\n \"f1\": 0.041603397651006783,\n \"f1_stderr\": 0.0011146754682383132\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009855951478392721,\n \"acc_stderr\": 0.0027210765770416586\n }\n}\n```", "repo_url": "https://huggingface.co/w95/megachat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|drop|3_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-59-20.049368.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-59-20.049368.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T15-59-20.049368.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T15-59-20.049368.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T15-59-20.049368.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["**/details_harness|winogrande|5_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T15-59-20.049368.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T15_59_20.049368", "path": ["results_2023-11-13T15-59-20.049368.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T15-59-20.049368.parquet"]}]}]} | 2023-11-13T16:02:37+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of w95/megachat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model w95/megachat on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
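A minimal sketch (the details repository id below is an assumption based on the leaderboard's usual naming convention and is not confirmed by this card):
```python
from datasets import load_dataset

# Repository id assumed from the leaderboard's details naming convention for w95/megachat;
# the config name matches one listed in this repository ("harness_winogrande_5").
data = load_dataset("open-llm-leaderboard/details_w95__megachat",
                    "harness_winogrande_5",
                    split="train")
```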
## Latest results
These are the latest results from run 2023-11-13T15:59:20.049368 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of w95/megachat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model w95/megachat on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:59:20.049368(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of w95/megachat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model w95/megachat on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T15:59:20.049368(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
14,
31,
163,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of w95/megachat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model w95/megachat on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T15:59:20.049368(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
70c897ca1db9b9da849cfbf1c03b13e0f78cf63d | # Dataset Card for "GPTextSum2_data-wiki_results"
rouge= {'rouge1': 0.14879950953478574, 'rouge2': 0.05639333842690715, 'rougeL': 0.11391066004959453, 'rougeLsum': 0.11391066004959453}
bert= {'precision': 0.7345356345176697, 'recall': 0.6217012137174607, 'f1': 0.6732800990343094}
mover 0.5359516886300142 | arthurmluz/GPTextSum2_data-wiki_results | [
"region:us"
]
| 2023-11-13T16:08:21+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 83103, "num_examples": 20}], "download_size": 80493, "dataset_size": 83103}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:25:39+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-wiki_results"
rouge= {'rouge1': 0.14879950953478574, 'rouge2': 0.05639333842690715, 'rougeL': 0.11391066004959453, 'rougeLsum': 0.11391066004959453}
bert= {'precision': 0.7345356345176697, 'recall': 0.6217012137174607, 'f1': 0.6732800990343094}
mover 0.5359516886300142 | [
"# Dataset Card for \"GPTextSum2_data-wiki_results\"\n\nrouge= {'rouge1': 0.14879950953478574, 'rouge2': 0.05639333842690715, 'rougeL': 0.11391066004959453, 'rougeLsum': 0.11391066004959453}\n\nbert= {'precision': 0.7345356345176697, 'recall': 0.6217012137174607, 'f1': 0.6732800990343094}\n\nmover 0.5359516886300142"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-wiki_results\"\n\nrouge= {'rouge1': 0.14879950953478574, 'rouge2': 0.05639333842690715, 'rougeL': 0.11391066004959453, 'rougeLsum': 0.11391066004959453}\n\nbert= {'precision': 0.7345356345176697, 'recall': 0.6217012137174607, 'f1': 0.6732800990343094}\n\nmover 0.5359516886300142"
]
| [
6,
135
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-wiki_results\"\n\nrouge= {'rouge1': 0.14879950953478574, 'rouge2': 0.05639333842690715, 'rougeL': 0.11391066004959453, 'rougeLsum': 0.11391066004959453}\n\nbert= {'precision': 0.7345356345176697, 'recall': 0.6217012137174607, 'f1': 0.6732800990343094}\n\nmover 0.5359516886300142"
]
|
7ae9c1e8966f21fa1e9b877627a8f16c6aae9409 | # Dataset Card for "GPTextSum2_data-wiki_1024_results"
rouge= {'rouge1': 0.16678547787209053, 'rouge2': 0.06085047629289851, 'rougeL': 0.12034060548656048, 'rougeLsum': 0.12034060548656048}
bert= {'precision': 0.7333554565906525, 'recall': 0.6233203381299972, 'f1': 0.6735965102910996}
mover =0.5412529684132814 | arthurmluz/GPTextSum2_data-wiki_1024_results | [
"region:us"
]
| 2023-11-13T16:12:38+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 83374, "num_examples": 20}], "download_size": 80632, "dataset_size": 83374}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:25:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-wiki_1024_results"
rouge= {'rouge1': 0.16678547787209053, 'rouge2': 0.06085047629289851, 'rougeL': 0.12034060548656048, 'rougeLsum': 0.12034060548656048}
bert= {'precision': 0.7333554565906525, 'recall': 0.6233203381299972, 'f1': 0.6735965102910996}
mover =0.5412529684132814 | [
"# Dataset Card for \"GPTextSum2_data-wiki_1024_results\"\n\nrouge= {'rouge1': 0.16678547787209053, 'rouge2': 0.06085047629289851, 'rougeL': 0.12034060548656048, 'rougeLsum': 0.12034060548656048}\n\nbert= {'precision': 0.7333554565906525, 'recall': 0.6233203381299972, 'f1': 0.6735965102910996}\n\nmover =0.5412529684132814"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-wiki_1024_results\"\n\nrouge= {'rouge1': 0.16678547787209053, 'rouge2': 0.06085047629289851, 'rougeL': 0.12034060548656048, 'rougeLsum': 0.12034060548656048}\n\nbert= {'precision': 0.7333554565906525, 'recall': 0.6233203381299972, 'f1': 0.6735965102910996}\n\nmover =0.5412529684132814"
]
| [
6,
140
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-wiki_1024_results\"\n\nrouge= {'rouge1': 0.16678547787209053, 'rouge2': 0.06085047629289851, 'rougeL': 0.12034060548656048, 'rougeLsum': 0.12034060548656048}\n\nbert= {'precision': 0.7333554565906525, 'recall': 0.6233203381299972, 'f1': 0.6735965102910996}\n\nmover =0.5412529684132814"
]
|
01dc205815d66b0c04be5f013c5afd9ff119e7e1 | # Dataset Card for "GPTextSum2_data-wiki_cstnews_results"
rouge= {'rouge1': 0.40559145209215386, 'rouge2': 0.1858323707445477, 'rougeL': 0.2713738809702273, 'rougeLsum': 0.2713738809702273}
bert= {'precision': 0.7676798492670059, 'recall': 0.7191876947879792, 'f1': 0.7423095703125}
mover = 0.6047207310084797 | arthurmluz/GPTextSum2_data-wiki_cstnews_results | [
"region:us"
]
| 2023-11-13T16:16:03+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 92922, "num_examples": 20}], "download_size": 89357, "dataset_size": 92922}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:26:20+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-wiki_cstnews_results"
rouge= {'rouge1': 0.40559145209215386, 'rouge2': 0.1858323707445477, 'rougeL': 0.2713738809702273, 'rougeLsum': 0.2713738809702273}
bert= {'precision': 0.7676798492670059, 'recall': 0.7191876947879792, 'f1': 0.7423095703125}
mover = 0.6047207310084797 | [
"# Dataset Card for \"GPTextSum2_data-wiki_cstnews_results\"\n\nrouge= {'rouge1': 0.40559145209215386, 'rouge2': 0.1858323707445477, 'rougeL': 0.2713738809702273, 'rougeLsum': 0.2713738809702273}\n\nbert= {'precision': 0.7676798492670059, 'recall': 0.7191876947879792, 'f1': 0.7423095703125}\n\nmover = 0.6047207310084797"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-wiki_cstnews_results\"\n\nrouge= {'rouge1': 0.40559145209215386, 'rouge2': 0.1858323707445477, 'rougeL': 0.2713738809702273, 'rougeLsum': 0.2713738809702273}\n\nbert= {'precision': 0.7676798492670059, 'recall': 0.7191876947879792, 'f1': 0.7423095703125}\n\nmover = 0.6047207310084797"
]
| [
6,
145
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-wiki_cstnews_results\"\n\nrouge= {'rouge1': 0.40559145209215386, 'rouge2': 0.1858323707445477, 'rougeL': 0.2713738809702273, 'rougeLsum': 0.2713738809702273}\n\nbert= {'precision': 0.7676798492670059, 'recall': 0.7191876947879792, 'f1': 0.7423095703125}\n\nmover = 0.6047207310084797"
]
|
057ded2cd9ce73ea72e0a1a78d2fafab9a6f2493 | # Dataset Card for "GPTextSum2_data-wiki_cstnews_1024_results"
rouge= {'rouge1': 0.4193415683921648, 'rouge2': 0.1880750266325746, 'rougeL': 0.26987694908006155, 'rougeLsum': 0.26987694908006155}
bert= {'precision': 0.767864066362381, 'recall': 0.7225012689828872, 'f1': 0.7442910760641098}
mover = 0.6068481625206206 | arthurmluz/GPTextSum2_data-wiki_cstnews_1024_results | [
"region:us"
]
| 2023-11-13T16:18:28+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 93413, "num_examples": 20}], "download_size": 90435, "dataset_size": 93413}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:26:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-wiki_cstnews_1024_results"
rouge= {'rouge1': 0.4193415683921648, 'rouge2': 0.1880750266325746, 'rougeL': 0.26987694908006155, 'rougeLsum': 0.26987694908006155}
bert= {'precision': 0.767864066362381, 'recall': 0.7225012689828872, 'f1': 0.7442910760641098}
mover = 0.6068481625206206 | [
"# Dataset Card for \"GPTextSum2_data-wiki_cstnews_1024_results\"\n\nrouge= {'rouge1': 0.4193415683921648, 'rouge2': 0.1880750266325746, 'rougeL': 0.26987694908006155, 'rougeLsum': 0.26987694908006155}\n\nbert= {'precision': 0.767864066362381, 'recall': 0.7225012689828872, 'f1': 0.7442910760641098}\n\nmover = 0.6068481625206206"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-wiki_cstnews_1024_results\"\n\nrouge= {'rouge1': 0.4193415683921648, 'rouge2': 0.1880750266325746, 'rougeL': 0.26987694908006155, 'rougeLsum': 0.26987694908006155}\n\nbert= {'precision': 0.767864066362381, 'recall': 0.7225012689828872, 'f1': 0.7442910760641098}\n\nmover = 0.6068481625206206"
]
| [
6,
146
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-wiki_cstnews_1024_results\"\n\nrouge= {'rouge1': 0.4193415683921648, 'rouge2': 0.1880750266325746, 'rougeL': 0.26987694908006155, 'rougeLsum': 0.26987694908006155}\n\nbert= {'precision': 0.767864066362381, 'recall': 0.7225012689828872, 'f1': 0.7442910760641098}\n\nmover = 0.6068481625206206"
]
|
5a97195cc386f78834d8133d0d91fe0e59fff42a | # Dataset Card for "GPTextSum2_data-wiki_temario_results"
rouge= {'rouge1': 0.45050324572461636, 'rouge2': 0.2010668922579611, 'rougeL': 0.28295192431911953, 'rougeLsum': 0.28295192431911953}
bert= {'precision': 0.7481734365224838, 'recall': 0.736190888285637, 'f1': 0.7416769295930863}
mover 0.6111134514384304 | arthurmluz/GPTextSum2_data-wiki_temario_results | [
"region:us"
]
| 2023-11-13T16:21:11+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 99153, "num_examples": 20}], "download_size": 96177, "dataset_size": 99153}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:27:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-wiki_temario_results"
rouge= {'rouge1': 0.45050324572461636, 'rouge2': 0.2010668922579611, 'rougeL': 0.28295192431911953, 'rougeLsum': 0.28295192431911953}
bert= {'precision': 0.7481734365224838, 'recall': 0.736190888285637, 'f1': 0.7416769295930863}
mover 0.6111134514384304 | [
"# Dataset Card for \"GPTextSum2_data-wiki_temario_results\"\n\nrouge= {'rouge1': 0.45050324572461636, 'rouge2': 0.2010668922579611, 'rougeL': 0.28295192431911953, 'rougeLsum': 0.28295192431911953}\n\nbert= {'precision': 0.7481734365224838, 'recall': 0.736190888285637, 'f1': 0.7416769295930863}\n\nmover 0.6111134514384304"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-wiki_temario_results\"\n\nrouge= {'rouge1': 0.45050324572461636, 'rouge2': 0.2010668922579611, 'rougeL': 0.28295192431911953, 'rougeLsum': 0.28295192431911953}\n\nbert= {'precision': 0.7481734365224838, 'recall': 0.736190888285637, 'f1': 0.7416769295930863}\n\nmover 0.6111134514384304"
]
| [
6,
143
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-wiki_temario_results\"\n\nrouge= {'rouge1': 0.45050324572461636, 'rouge2': 0.2010668922579611, 'rougeL': 0.28295192431911953, 'rougeLsum': 0.28295192431911953}\n\nbert= {'precision': 0.7481734365224838, 'recall': 0.736190888285637, 'f1': 0.7416769295930863}\n\nmover 0.6111134514384304"
]
|
e6e0e04dbba30c09d9661ddbb440dce1ea508e5a | This CSV contains voice lines up to "Between Facades and Familiar Faces".
Duration: January 19, 2023, 10:00:00 AM – February 06, 2023, 03:59:59 AM | Epharedam/xiaocsv | [
"region:us"
]
| 2023-11-13T16:32:19+00:00 | {} | 2023-11-13T16:34:07+00:00 | []
| []
| TAGS
#region-us
| This CSV contains voice lines up to "Between Facades and Familiar Faces".
Duration: January 19, 2023, 10:00:00 AM – February 06, 2023, 03:59:59 AM | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
e74af1328d0d716e85d34e2787c2f858dd850a61 | # Dataset Card for "GPTextSum2_data-wiki_gptextsum2_results"
rouge= {'rouge1': 0.4600676970614709, 'rouge2': 0.2024089594170197, 'rougeL': 0.28630530856939856, 'rougeLsum': 0.28630530856939856}
bert= {'precision': 0.7757186979055405, 'recall': 0.7327599436044693, 'f1': 0.7533363491296768}
mover = 0.6147837362634168 | arthurmluz/GPTextSum2_data-wiki_gptextsum2_results | [
"region:us"
]
| 2023-11-13T16:36:37+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 93872, "num_examples": 20}], "download_size": 90986, "dataset_size": 93872}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:24:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "GPTextSum2_data-wiki_gptextsum2_results"
rouge= {'rouge1': 0.4600676970614709, 'rouge2': 0.2024089594170197, 'rougeL': 0.28630530856939856, 'rougeLsum': 0.28630530856939856}
bert= {'precision': 0.7757186979055405, 'recall': 0.7327599436044693, 'f1': 0.7533363491296768}
mover = 0.6147837362634168 | [
"# Dataset Card for \"GPTextSum2_data-wiki_gptextsum2_results\"\n\nrouge= {'rouge1': 0.4600676970614709, 'rouge2': 0.2024089594170197, 'rougeL': 0.28630530856939856, 'rougeLsum': 0.28630530856939856}\n\nbert= {'precision': 0.7757186979055405, 'recall': 0.7327599436044693, 'f1': 0.7533363491296768}\n\nmover = 0.6147837362634168"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"GPTextSum2_data-wiki_gptextsum2_results\"\n\nrouge= {'rouge1': 0.4600676970614709, 'rouge2': 0.2024089594170197, 'rougeL': 0.28630530856939856, 'rougeLsum': 0.28630530856939856}\n\nbert= {'precision': 0.7757186979055405, 'recall': 0.7327599436044693, 'f1': 0.7533363491296768}\n\nmover = 0.6147837362634168"
]
| [
6,
146
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"GPTextSum2_data-wiki_gptextsum2_results\"\n\nrouge= {'rouge1': 0.4600676970614709, 'rouge2': 0.2024089594170197, 'rougeL': 0.28630530856939856, 'rougeLsum': 0.28630530856939856}\n\nbert= {'precision': 0.7757186979055405, 'recall': 0.7327599436044693, 'f1': 0.7533363491296768}\n\nmover = 0.6147837362634168"
]
|
59c045ff961528ad39c20e006bffea685a1719f7 | # Dataset Card for "eur-lex-europa-merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | chemNLP/eur-lex-europa-merged | [
"region:us"
]
| 2023-11-13T16:36:58+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2027022, "num_examples": 75}], "download_size": 1035116, "dataset_size": 2027022}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T16:37:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "eur-lex-europa-merged"
More Information needed | [
"# Dataset Card for \"eur-lex-europa-merged\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"eur-lex-europa-merged\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"eur-lex-europa-merged\"\n\nMore Information needed"
]
|
449753cf2e6ae419ab24d85bd8ec19863b158480 | # Dataset Card for "temario_data-wiki_gptextsum2_results"
rouge= {'rouge1': 0.3680164663597422, 'rouge2': 0.14753246828083666, 'rougeL': 0.22311494070849122, 'rougeLsum': 0.22311494070849122}
bert= {'precision': 0.7452675557136536, 'recall': 0.692316381931305, 'f1': 0.7174785995483398}
mover = 0.6042955916135626 | arthurmluz/temario_data-wiki_gptextsum2_results | [
"region:us"
]
| 2023-11-13T16:40:14+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 218254, "num_examples": 25}], "download_size": 174284, "dataset_size": 218254}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T03:26:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "temario_data-wiki_gptextsum2_results"
rouge= {'rouge1': 0.3680164663597422, 'rouge2': 0.14753246828083666, 'rougeL': 0.22311494070849122, 'rougeLsum': 0.22311494070849122}
bert= {'precision': 0.7452675557136536, 'recall': 0.692316381931305, 'f1': 0.7174785995483398}
mover = 0.6042955916135626 | [
"# Dataset Card for \"temario_data-wiki_gptextsum2_results\"\n\nrouge= {'rouge1': 0.3680164663597422, 'rouge2': 0.14753246828083666, 'rougeL': 0.22311494070849122, 'rougeLsum': 0.22311494070849122}\n\nbert= {'precision': 0.7452675557136536, 'recall': 0.692316381931305, 'f1': 0.7174785995483398}\n\nmover = 0.6042955916135626"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"temario_data-wiki_gptextsum2_results\"\n\nrouge= {'rouge1': 0.3680164663597422, 'rouge2': 0.14753246828083666, 'rougeL': 0.22311494070849122, 'rougeLsum': 0.22311494070849122}\n\nbert= {'precision': 0.7452675557136536, 'recall': 0.692316381931305, 'f1': 0.7174785995483398}\n\nmover = 0.6042955916135626"
]
| [
6,
137
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"temario_data-wiki_gptextsum2_results\"\n\nrouge= {'rouge1': 0.3680164663597422, 'rouge2': 0.14753246828083666, 'rougeL': 0.22311494070849122, 'rougeLsum': 0.22311494070849122}\n\nbert= {'precision': 0.7452675557136536, 'recall': 0.692316381931305, 'f1': 0.7174785995483398}\n\nmover = 0.6042955916135626"
]
|
54070431b9367a11a85bc19fbb831f959e301a0b | # Dataset Card for "gptextsum2_data-xlsum_results"
rouge= {'rouge1': 0.1392392018794706, 'rouge2': 0.05018310140346884, 'rougeL': 0.09939131774579779, 'rougeLsum': 0.09939131774579779}
bert= {'precision': 0.7323424190282821, 'recall': 0.6123941779136658, 'f1': 0.6667265325784684} | arthurmluz/GPTextSum2_data-xlsum_results | [
"region:us"
]
| 2023-11-13T16:43:59+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 82985, "num_examples": 20}], "download_size": 80083, "dataset_size": 82985}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:19:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gptextsum2_data-xlsum_results"
rouge= {'rouge1': 0.1392392018794706, 'rouge2': 0.05018310140346884, 'rougeL': 0.09939131774579779, 'rougeLsum': 0.09939131774579779}
bert= {'precision': 0.7323424190282821, 'recall': 0.6123941779136658, 'f1': 0.6667265325784684} | [
"# Dataset Card for \"gptextsum2_data-xlsum_results\"\n\nrouge= {'rouge1': 0.1392392018794706, 'rouge2': 0.05018310140346884, 'rougeL': 0.09939131774579779, 'rougeLsum': 0.09939131774579779}\n\nbert= {'precision': 0.7323424190282821, 'recall': 0.6123941779136658, 'f1': 0.6667265325784684}"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gptextsum2_data-xlsum_results\"\n\nrouge= {'rouge1': 0.1392392018794706, 'rouge2': 0.05018310140346884, 'rougeL': 0.09939131774579779, 'rougeLsum': 0.09939131774579779}\n\nbert= {'precision': 0.7323424190282821, 'recall': 0.6123941779136658, 'f1': 0.6667265325784684}"
]
| [
6,
128
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gptextsum2_data-xlsum_results\"\n\nrouge= {'rouge1': 0.1392392018794706, 'rouge2': 0.05018310140346884, 'rougeL': 0.09939131774579779, 'rougeLsum': 0.09939131774579779}\n\nbert= {'precision': 0.7323424190282821, 'recall': 0.6123941779136658, 'f1': 0.6667265325784684}"
]
|
5442cd1b0fff15776bae06df49310192390d759f | # Dataset Card for "German_invoices_dataset_for_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Aoschu/German_invoices_dataset_for_donut | [
"region:us"
]
| 2023-11-13T16:47:36+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14904771.0, "num_examples": 97}, {"name": "validation", "num_bytes": 2313353.0, "num_examples": 14}, {"name": "test", "num_bytes": 3141357.0, "num_examples": 18}], "download_size": 13813118, "dataset_size": 20359481.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-11-13T19:22:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "German_invoices_dataset_for_donut"
More Information needed | [
"# Dataset Card for \"German_invoices_dataset_for_donut\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"German_invoices_dataset_for_donut\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"German_invoices_dataset_for_donut\"\n\nMore Information needed"
]
|
b31936a0b6c267e87d373014034cd8fb44ced2fb |
# Dataset Card for the Emu Edit Test Set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage: https://emu-edit.metademolab.com/**
- **Paper: https://emu-edit.metademolab.com/assets/emu_edit.pdf**
### Dataset Summary
To create a benchmark for image editing we first define seven different categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture).
Then, we utilize the diverse set of input images from the [MagicBrush benchmark](https://huggingface.co/datasets/osunlp/MagicBrush), and for each editing operation, we task crowd workers to devise relevant, creative, and challenging instructions.
Moreover, to increase the quality of the collected examples, we apply a post-verification stage, in which crowd workers filter examples with irrelevant instructions.
Finally, to support evaluation for methods that require input and output captions (e.g. prompt2prompt and pnp), we additionally collect an input caption and output caption for each example.
When doing so, we ask annotators to ensure that the captions capture both important elements in the image, and elements that should change based on the instruction.
Additionally, to support proper comparison with Emu Edit, we publicly release the model generations on the test set [here](https://huggingface.co/datasets/facebook/emu_edit_test_set_generations).
For more details please see our [paper](https://emu-edit.metademolab.com/assets/emu_edit.pdf) and [project page](https://emu-edit.metademolab.com/).
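A minimal loading sketch (the repository id and field names follow this card's declared configuration; treat it as an illustration rather than an official recipe):
```python
from datasets import load_dataset

# Each example pairs an input image with an editing instruction, its task category,
# and the input/output captions used by caption-based baselines.
test_set = load_dataset("facebook/emu_edit_test_set", split="test")

example = test_set[0]
print(example["task"], "-", example["instruction"])
print(example["input_caption"], "->", example["output_caption"])
image = example["image"]  # input image as a PIL.Image
```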
### Licensing Information
Licensed with CC-BY-NC 4.0 License available [here](https://creativecommons.org/licenses/by-nc/4.0/legalcode?fbclid=IwAR2SYZjLRywwUMblkWg0LyAxHVVTloIFlvC-ju3BthIYtOM2jpQHgbeXOsM).
### Citation Information
```
@inproceedings{Sheynin2023EmuEP,
title={Emu Edit: Precise Image Editing via Recognition and Generation Tasks},
author={Shelly Sheynin and Adam Polyak and Uriel Singer and Yuval Kirstain and Amit Zohar and Oron Ashual and Devi Parikh and Yaniv Taigman},
year={2023},
url={https://api.semanticscholar.org/CorpusID:265221391}
}
``` | facebook/emu_edit_test_set | [
"region:us"
]
| 2023-11-13T16:52:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "task", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "hash", "dtype": "string"}, {"name": "input_caption", "dtype": "string"}, {"name": "output_caption", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 766327032.29, "num_examples": 2022}, {"name": "test", "num_bytes": 1353530752.0, "num_examples": 3589}], "download_size": 1904598290, "dataset_size": 2119857784.29}} | 2023-11-19T07:37:12+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for the Emu Edit Test Set
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Paper: URL
### Dataset Summary
To create a benchmark for image editing we first define seven different categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture).
Then, we utilize the diverse set of input images from the MagicBrush benchmark, and for each editing operation, we task crowd workers to devise relevant, creative, and challenging instructions.
Moreover, to increase the quality of the collected examples, we apply a post-verification stage, in which crowd workers filter examples with irrelevant instructions.
Finally, to support evaluation for methods that require input and output captions (e.g. prompt2prompt and pnp), we additionally collect an input caption and output caption for each example.
When doing so, we ask annotators to ensure that the captions capture both important elements in the image, and elements that should change based on the instruction.
Additionally, to support proper comparison with Emu Edit, we publicly release the model generations on the test set here.
For more details please see our paper and project page.
### Licensing Information
Licensed with CC-BY-NC 4.0 License available here.
| [
"# Dataset Card for the Emu Edit Test Set",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nTo create a benchmark for image editing we first define seven different categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture).\nThen, we utilize the diverse set of input images from the MagicBrush benchmark, and for each editing operation, we task crowd workers to devise relevant, creative, and challenging instructions.\nMoreover, to increase the quality of the collected examples, we apply a post-verification stage, in which crowd workers filter examples with irrelevant instructions.\nFinally, to support evaluation for methods that require input and output captions (e.g. prompt2prompt and pnp), we additionally collect an input caption and output caption for each example. \nWhen doing so, we ask annotators to ensure that the captions capture both important elements in the image, and elements that should change based on the instruction.\nAdditionally, to support proper comparison with Emu Edit with publicly release the model generations on the test set here.\nFor more details please see our paper and project page.",
"### Licensing Information\n\nLicensed with CC-BY-NC 4.0 License available here."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for the Emu Edit Test Set",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nTo create a benchmark for image editing we first define seven different categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture).\nThen, we utilize the diverse set of input images from the MagicBrush benchmark, and for each editing operation, we task crowd workers to devise relevant, creative, and challenging instructions.\nMoreover, to increase the quality of the collected examples, we apply a post-verification stage, in which crowd workers filter examples with irrelevant instructions.\nFinally, to support evaluation for methods that require input and output captions (e.g. prompt2prompt and pnp), we additionally collect an input caption and output caption for each example. \nWhen doing so, we ask annotators to ensure that the captions capture both important elements in the image, and elements that should change based on the instruction.\nAdditionally, to support proper comparison with Emu Edit with publicly release the model generations on the test set here.\nFor more details please see our paper and project page.",
"### Licensing Information\n\nLicensed with CC-BY-NC 4.0 License available here."
]
| [
6,
11,
33,
12,
267,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for the Emu Edit Test Set## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Additional Information\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: URL\n- Paper: URL### Dataset Summary\n\nTo create a benchmark for image editing we first define seven different categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture).\nThen, we utilize the diverse set of input images from the MagicBrush benchmark, and for each editing operation, we task crowd workers to devise relevant, creative, and challenging instructions.\nMoreover, to increase the quality of the collected examples, we apply a post-verification stage, in which crowd workers filter examples with irrelevant instructions.\nFinally, to support evaluation for methods that require input and output captions (e.g. prompt2prompt and pnp), we additionally collect an input caption and output caption for each example. \nWhen doing so, we ask annotators to ensure that the captions capture both important elements in the image, and elements that should change based on the instruction.\nAdditionally, to support proper comparison with Emu Edit with publicly release the model generations on the test set here.\nFor more details please see our paper and project page.### Licensing Information\n\nLicensed with CC-BY-NC 4.0 License available here."
]
|
f49809354950863775a122f2abff996b23143599 | # Dataset Card for the Emu Edit Generations on Emu Edit Test Set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage: https://emu-edit.metademolab.com/**
- **Paper: https://emu-edit.metademolab.com/assets/emu_edit.pdf**
### Dataset Summary
This dataset contains Emu Edit's generations on the [Emu Edit test set](https://huggingface.co/datasets/facebook/emu_edit_test_set). For more information please read our [paper](https://emu-edit.metademolab.com/assets/emu_edit.pdf) or visit our [homepage](https://emu-edit.metademolab.com/).
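A minimal sketch for inspecting the released generations (field names follow this card's declared features; `edited_image` holds the model output):
```python
from datasets import load_dataset

# Each row carries the test-set fields plus Emu Edit's edited image for that instruction.
generations = load_dataset("facebook/emu_edit_test_set_generations", split="test")

row = generations[0]
print(row["model"], "-", row["instruction"])
source = row["image"]         # original input image
edited = row["edited_image"]  # Emu Edit's generation
```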
### Licensing Information
Licensed with CC-BY-NC 4.0 License available [here](https://creativecommons.org/licenses/by-nc/4.0/legalcode?fbclid=IwAR2SYZjLRywwUMblkWg0LyAxHVVTloIFlvC-ju3BthIYtOM2jpQHgbeXOsM).
### Citation Information
```
@inproceedings{Sheynin2023EmuEP,
title={Emu Edit: Precise Image Editing via Recognition and Generation Tasks},
author={Shelly Sheynin and Adam Polyak and Uriel Singer and Yuval Kirstain and Amit Zohar and Oron Ashual and Devi Parikh and Yaniv Taigman},
year={2023},
url={https://api.semanticscholar.org/CorpusID:265221391}
}
``` | facebook/emu_edit_test_set_generations | [
"region:us"
]
| 2023-11-13T16:53:50+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "task", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "hash", "dtype": "string"}, {"name": "input_caption", "dtype": "string"}, {"name": "output_caption", "dtype": "string"}, {"name": "edited_image", "dtype": "image"}, {"name": "model", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 2739856920.3, "num_examples": 3589}, {"name": "test", "num_bytes": 1538831841.4, "num_examples": 2022}], "download_size": 1424484094, "dataset_size": 4278688761.7000003}} | 2023-11-19T07:36:56+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for the Emu Edit Generations on Emu Edit Test Set
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Paper: URL
### Dataset Summary
This dataset contains Emu Edit's generations on the Emu Edit test set. For more information please read our paper or visit our homepage.
### Licensing Information
Licensed with CC-BY-NC 4.0 License available here.
| [
"# Dataset Card for the Emu Edit Generations on Emu Edit Test Set",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nThis dataset contains Emu Edit's generations on the Emu Edit test set. For more information please read our paper or visit our homepage.",
"### Licensing Information\n\nLicensed with CC-BY-NC 4.0 License available here."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for the Emu Edit Generations on Emu Edit Test Set",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nThis dataset contains Emu Edit's generations on the Emu Edit test set. For more information please read our paper or visit our homepage.",
"### Licensing Information\n\nLicensed with CC-BY-NC 4.0 License available here."
]
| [
6,
17,
33,
12,
39,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for the Emu Edit Generations on Emu Edit Test Set## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Additional Information\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: URL\n- Paper: URL### Dataset Summary\n\nThis dataset contains Emu Edit's generations on the Emu Edit test set. For more information please read our paper or visit our homepage.### Licensing Information\n\nLicensed with CC-BY-NC 4.0 License available here."
]
|
c88c21172ba937e0087987b1cfb8dba0823c15e1 | # Dataset Card for "mini_pubmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aryopg/mini_pubmed | [
"region:us"
]
| 2023-11-13T16:54:43+00:00 | {"dataset_info": {"features": [{"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13296774, "num_examples": 10000}], "download_size": 7578772, "dataset_size": 13296774}} | 2023-11-13T16:56:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mini_pubmed"
More Information needed | [
"# Dataset Card for \"mini_pubmed\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mini_pubmed\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mini_pubmed\"\n\nMore Information needed"
]
|
f6a2002a5b38cbb97c23f826c5c418b4b23b704d | # Dataset Card for "gptextsum2_data-xlsum_temario_results"
rouge= {'rouge1': 0.34663019021874986, 'rouge2': 0.14819749362220133, 'rougeL': 0.21196170584218882, 'rougeLsum': 0.21196170584218882}
bert= {'precision': 0.7504127502441407, 'recall': 0.6941693127155304, 'f1': 0.720111683011055}
mover = 0.5858268677961962 | arthurmluz/GPTextSum2_data-xlsum_temario_results | [
"region:us"
]
| 2023-11-13T16:55:31+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 92617, "num_examples": 20}], "download_size": 93095, "dataset_size": 92617}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:21:56+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gptextsum2_data-xlsum_temario_results"
rouge= {'rouge1': 0.34663019021874986, 'rouge2': 0.14819749362220133, 'rougeL': 0.21196170584218882, 'rougeLsum': 0.21196170584218882}
bert= {'precision': 0.7504127502441407, 'recall': 0.6941693127155304, 'f1': 0.720111683011055}
mover = 0.5858268677961962 | [
"# Dataset Card for \"gptextsum2_data-xlsum_temario_results\"\n\nrouge= {'rouge1': 0.34663019021874986, 'rouge2': 0.14819749362220133, 'rougeL': 0.21196170584218882, 'rougeLsum': 0.21196170584218882}\n\nbert= {'precision': 0.7504127502441407, 'recall': 0.6941693127155304, 'f1': 0.720111683011055}\n\nmover = 0.5858268677961962"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gptextsum2_data-xlsum_temario_results\"\n\nrouge= {'rouge1': 0.34663019021874986, 'rouge2': 0.14819749362220133, 'rougeL': 0.21196170584218882, 'rougeLsum': 0.21196170584218882}\n\nbert= {'precision': 0.7504127502441407, 'recall': 0.6941693127155304, 'f1': 0.720111683011055}\n\nmover = 0.5858268677961962"
]
| [
6,
140
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gptextsum2_data-xlsum_temario_results\"\n\nrouge= {'rouge1': 0.34663019021874986, 'rouge2': 0.14819749362220133, 'rougeL': 0.21196170584218882, 'rougeLsum': 0.21196170584218882}\n\nbert= {'precision': 0.7504127502441407, 'recall': 0.6941693127155304, 'f1': 0.720111683011055}\n\nmover = 0.5858268677961962"
]
|
7b6dc5ed9e9eaedcf0c89706a10523e54368b2b8 | # Dataset Card for "gptextsum2_data-xlsum_cstnews_results"
rouge= {'rouge1': 0.38748303853813626, 'rouge2': 0.18195048965428265, 'rougeL': 0.24222310213649534, 'rougeLsum': 0.24222310213649534}
bert= {'precision': 0.7680569976568222, 'recall': 0.7077599495649338, 'f1': 0.7364159941673278}
mover = 0.630959465845996 | arthurmluz/GPTextSum2_data-xlsum_cstnews_results | [
"region:us"
]
| 2023-11-13T16:57:26+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 90784, "num_examples": 20}], "download_size": 88781, "dataset_size": 90784}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:19:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gptextsum2_data-xlsum_cstnews_results"
rouge= {'rouge1': 0.38748303853813626, 'rouge2': 0.18195048965428265, 'rougeL': 0.24222310213649534, 'rougeLsum': 0.24222310213649534}
bert= {'precision': 0.7680569976568222, 'recall': 0.7077599495649338, 'f1': 0.7364159941673278}
mover = 0.630959465845996 | [
"# Dataset Card for \"gptextsum2_data-xlsum_cstnews_results\"\n\nrouge= {'rouge1': 0.38748303853813626, 'rouge2': 0.18195048965428265, 'rougeL': 0.24222310213649534, 'rougeLsum': 0.24222310213649534}\n\nbert= {'precision': 0.7680569976568222, 'recall': 0.7077599495649338, 'f1': 0.7364159941673278}\n\nmover = 0.630959465845996"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gptextsum2_data-xlsum_cstnews_results\"\n\nrouge= {'rouge1': 0.38748303853813626, 'rouge2': 0.18195048965428265, 'rougeL': 0.24222310213649534, 'rougeLsum': 0.24222310213649534}\n\nbert= {'precision': 0.7680569976568222, 'recall': 0.7077599495649338, 'f1': 0.7364159941673278}\n\nmover = 0.630959465845996"
]
| [
6,
145
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gptextsum2_data-xlsum_cstnews_results\"\n\nrouge= {'rouge1': 0.38748303853813626, 'rouge2': 0.18195048965428265, 'rougeL': 0.24222310213649534, 'rougeLsum': 0.24222310213649534}\n\nbert= {'precision': 0.7680569976568222, 'recall': 0.7077599495649338, 'f1': 0.7364159941673278}\n\nmover = 0.630959465845996"
]
|
34753ecbd0eb38b78e0e99b730fafc47e2d420b3 | # Dataset Card for "gptextsum2_data-xlsum_cstnews_1024_results"
rouge= {'rouge1': 0.39418346930184295, 'rouge2': 0.17965035175767424, 'rougeL': 0.2455202016037282, 'rougeLsum': 0.2455202016037282}
bert= {'precision': 0.7633351445198059, 'recall': 0.7100760132074356, 'f1': 0.7354371815919876}
mover = 0.6302502833672502 | arthurmluz/GPTextSum2_data-xlsum_cstnews_1024_results | [
"region:us"
]
| 2023-11-13T16:58:46+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 91939, "num_examples": 20}], "download_size": 89878, "dataset_size": 91939}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:19:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gptextsum2_data-xlsum_cstnews_1024_results"
rouge= {'rouge1': 0.39418346930184295, 'rouge2': 0.17965035175767424, 'rougeL': 0.2455202016037282, 'rougeLsum': 0.2455202016037282}
bert= {'precision': 0.7633351445198059, 'recall': 0.7100760132074356, 'f1': 0.7354371815919876}
mover = 0.6302502833672502 | [
"# Dataset Card for \"gptextsum2_data-xlsum_cstnews_1024_results\"\n\nrouge= {'rouge1': 0.39418346930184295, 'rouge2': 0.17965035175767424, 'rougeL': 0.2455202016037282, 'rougeLsum': 0.2455202016037282}\n\nbert= {'precision': 0.7633351445198059, 'recall': 0.7100760132074356, 'f1': 0.7354371815919876}\n\nmover 0.6302502833672502"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gptextsum2_data-xlsum_cstnews_1024_results\"\n\nrouge= {'rouge1': 0.39418346930184295, 'rouge2': 0.17965035175767424, 'rougeL': 0.2455202016037282, 'rougeLsum': 0.2455202016037282}\n\nbert= {'precision': 0.7633351445198059, 'recall': 0.7100760132074356, 'f1': 0.7354371815919876}\n\nmover 0.6302502833672502"
]
| [
6,
145
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gptextsum2_data-xlsum_cstnews_1024_results\"\n\nrouge= {'rouge1': 0.39418346930184295, 'rouge2': 0.17965035175767424, 'rougeL': 0.2455202016037282, 'rougeLsum': 0.2455202016037282}\n\nbert= {'precision': 0.7633351445198059, 'recall': 0.7100760132074356, 'f1': 0.7354371815919876}\n\nmover 0.6302502833672502"
]
|
f44ff7383ebe7918250e74e4e6460a6153a778fd | # Dataset Card for "gptextsum2_data-xlsum_gptextsum2_results"
rouge= {'rouge1': 0.4732098982865069, 'rouge2': 0.2074511837452419, 'rougeL': 0.2680283249468459, 'rougeLsum': 0.2680283249468459}
bert= {'precision': 0.7702845484018326, 'recall': 0.7441302567720414, 'f1': 0.7565477162599563}
mover = 0.6178455094647509 | arthurmluz/GPTextSum2_data-xlsum_gptextsum2_results | [
"region:us"
]
| 2023-11-13T17:00:34+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 96886, "num_examples": 20}], "download_size": 95912, "dataset_size": 96886}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T04:20:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "gptextsum2_data-xlsum_gptextsum2_results"
rouge= {'rouge1': 0.4732098982865069, 'rouge2': 0.2074511837452419, 'rougeL': 0.2680283249468459, 'rougeLsum': 0.2680283249468459}
bert= {'precision': 0.7702845484018326, 'recall': 0.7441302567720414, 'f1': 0.7565477162599563}
mover = 0.6178455094647509 | [
"# Dataset Card for \"gptextsum2_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.4732098982865069, 'rouge2': 0.2074511837452419, 'rougeL': 0.2680283249468459, 'rougeLsum': 0.2680283249468459}\n\nbert= {'precision': 0.7702845484018326, 'recall': 0.7441302567720414, 'f1': 0.7565477162599563}\n\nmover = 0.6178455094647509"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"gptextsum2_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.4732098982865069, 'rouge2': 0.2074511837452419, 'rougeL': 0.2680283249468459, 'rougeLsum': 0.2680283249468459}\n\nbert= {'precision': 0.7702845484018326, 'recall': 0.7441302567720414, 'f1': 0.7565477162599563}\n\nmover = 0.6178455094647509"
]
| [
6,
145
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"gptextsum2_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.4732098982865069, 'rouge2': 0.2074511837452419, 'rougeL': 0.2680283249468459, 'rougeLsum': 0.2680283249468459}\n\nbert= {'precision': 0.7702845484018326, 'recall': 0.7441302567720414, 'f1': 0.7565477162599563}\n\nmover = 0.6178455094647509"
]
|
8eab32a9474f79269b37aa76deae4add8766f7e9 | # Dataset Card for "cstnews_data-xlsum_gptextsum2_results"
rouge= {'rouge1': 0.5251493615673016, 'rouge2': 0.2936121215948489, 'rougeL': 0.35087788149320814, 'rougeLsum': 0.35087788149320814}
bert= {'precision': 0.7674689218401909, 'recall': 0.8024204447865486, 'f1': 0.7838323190808296}
mover = 0.6346333578747139 | arthurmluz/cstnews_data-xlsum_gptextsum2_results | [
"region:us"
]
| 2023-11-13T17:02:03+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 59919, "num_examples": 16}], "download_size": 59830, "dataset_size": 59919}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T03:54:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cstnews_data-xlsum_gptextsum2_results"
rouge= {'rouge1': 0.5251493615673016, 'rouge2': 0.2936121215948489, 'rougeL': 0.35087788149320814, 'rougeLsum': 0.35087788149320814}
bert= {'precision': 0.7674689218401909, 'recall': 0.8024204447865486, 'f1': 0.7838323190808296}
mover = 0.6346333578747139 | [
"# Dataset Card for \"cstnews_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.5251493615673016, 'rouge2': 0.2936121215948489, 'rougeL': 0.35087788149320814, 'rougeLsum': 0.35087788149320814}\n\nbert= {'precision': 0.7674689218401909, 'recall': 0.8024204447865486, 'f1': 0.7838323190808296}\n\nmover = 0.6346333578747139"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cstnews_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.5251493615673016, 'rouge2': 0.2936121215948489, 'rougeL': 0.35087788149320814, 'rougeLsum': 0.35087788149320814}\n\nbert= {'precision': 0.7674689218401909, 'recall': 0.8024204447865486, 'f1': 0.7838323190808296}\n\nmover = 0.6346333578747139"
]
| [
6,
144
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cstnews_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.5251493615673016, 'rouge2': 0.2936121215948489, 'rougeL': 0.35087788149320814, 'rougeLsum': 0.35087788149320814}\n\nbert= {'precision': 0.7674689218401909, 'recall': 0.8024204447865486, 'f1': 0.7838323190808296}\n\nmover = 0.6346333578747139"
]
|
034afb3b927e372cf925ebc1b18538f619929074 | # Dataset Card for "temario_data-xlsum_gptextsum2_results"
rouge= {'rouge1': 0.41527671599026306, 'rouge2': 0.15216375743250313, 'rougeL': 0.2336496143136067, 'rougeLsum': 0.2336496143136067}
bert= {'precision': 0.7253225016593933, 'recall': 0.7107182025909424, 'f1': 0.7176165866851807}
mover = 0.6200280069222645
| arthurmluz/temario_data-xlsum_gptextsum2_results | [
"region:us"
]
| 2023-11-13T17:04:23+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 228776, "num_examples": 25}], "download_size": 181580, "dataset_size": 228776}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-15T03:31:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "temario_data-xlsum_gptextsum2_results"
rouge= {'rouge1': 0.41527671599026306, 'rouge2': 0.15216375743250313, 'rougeL': 0.2336496143136067, 'rougeLsum': 0.2336496143136067}
bert= {'precision': 0.7253225016593933, 'recall': 0.7107182025909424, 'f1': 0.7176165866851807}
mover = 0.6200280069222645
| [
"# Dataset Card for \"temario_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.41527671599026306, 'rouge2': 0.15216375743250313, 'rougeL': 0.2336496143136067, 'rougeLsum': 0.2336496143136067}\n\nbert= {'precision': 0.7253225016593933, 'recall': 0.7107182025909424, 'f1': 0.7176165866851807}\n\nmover = 0.6200280069222645"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"temario_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.41527671599026306, 'rouge2': 0.15216375743250313, 'rougeL': 0.2336496143136067, 'rougeLsum': 0.2336496143136067}\n\nbert= {'precision': 0.7253225016593933, 'recall': 0.7107182025909424, 'f1': 0.7176165866851807}\n\nmover = 0.6200280069222645"
]
| [
6,
142
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"temario_data-xlsum_gptextsum2_results\"\n\nrouge= {'rouge1': 0.41527671599026306, 'rouge2': 0.15216375743250313, 'rougeL': 0.2336496143136067, 'rougeLsum': 0.2336496143136067}\n\nbert= {'precision': 0.7253225016593933, 'recall': 0.7107182025909424, 'f1': 0.7176165866851807}\n\nmover = 0.6200280069222645"
]
|
dbc8d3fca4a725ea2e8f09f92736f9b42ea9c01b | # Dataset Card for "ddr_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marcelittle/ddr_images | [
"region:us"
]
| 2023-11-13T17:16:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 24429708.0, "num_examples": 383}], "download_size": 24431333, "dataset_size": 24429708.0}} | 2023-11-15T01:19:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ddr_images"
More Information needed | [
"# Dataset Card for \"ddr_images\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ddr_images\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ddr_images\"\n\nMore Information needed"
]
|
ad5b9be469f3356880d9f37b15c6d7ce2f20d10f | # Dataset Card for ReDial - PTBR
- **Original dataset:** [Redial Huggingface](https://huggingface.co/datasets/re_dial)
- **Homepage:** [ReDial Dataset](https://redialdata.github.io/website/)
- **Repository:** [ReDialData](https://github.com/ReDialData/website/tree/data)
- **Paper:** [Towards Deep Conversational Recommendations](https://proceedings.neurips.cc/paper/2018/file/800de15c79c8d840f4e78d3af937d4d4-Paper.pdf)
### Dataset Summary
The ReDial (Recommendation Dialogues) PTBR dataset is an annotated collection of dialogues in which users recommend movies to each other, translated into Brazilian Portuguese.
The adapted Brazilian Portuguese version of this dataset was translated using [Maritalk](https://www.maritaca.ai/). This translated version opens up opportunities for research at the intersection of goal-directed dialogue systems (such as restaurant recommendations) and free-form, colloquial dialogue systems.
Some samples from the original dataset have been removed as we've reached the usage limit in Maritalk. Consequently, the training set has been reduced by nearly 10%.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English and Portuguese.
## Dataset Structure
### Data Instances
```
{
"conversationId": 391,
"messages": [
{
"messageId": 1021,
"senderWorkerId": 0,
"text": "Hi there, how are you? I\'m looking for movie recommendations",
"timeOffset": 0
},
{
"messageId": 1022,
"senderWorkerId": 1,
"text": "I am doing okay. What kind of movies do you like?",
"timeOffset": 15
},
{
"messageId": 1023,
"senderWorkerId": 0,
"text": "I like animations like @84779 and @191602",
"timeOffset": 66
},
{
"messageId": 1024,
"senderWorkerId": 0,
"text": "I also enjoy @122159",
"timeOffset": 86
},
{
"messageId": 1025,
"senderWorkerId": 0,
"text": "Anything artistic",
"timeOffset": 95
},
{
"messageId": 1026,
"senderWorkerId": 1,
"text": "You might like @165710 that was a good movie.",
"timeOffset": 135
},
{
"messageId": 1027,
"senderWorkerId": 0,
"text": "What\'s it about?",
"timeOffset": 151
},
{
"messageId": 1028,
"senderWorkerId": 1,
"text": "It has Alec Baldwin it is about a baby that works for a company and gets adopted it is very funny",
"timeOffset": 207
},
{
"messageId": 1029,
"senderWorkerId": 0,
"text": "That seems like a nice comedy",
"timeOffset": 238
},
{
"messageId": 1030,
"senderWorkerId": 0,
"text": "Do you have any animated recommendations that are a bit more dramatic? Like @151313 for example",
"timeOffset": 272
},
{
"messageId": 1031,
"senderWorkerId": 0,
"text": "I like comedies but I prefer films with a little more depth",
"timeOffset": 327
},
{
"messageId": 1032,
"senderWorkerId": 1,
"text": "That is a tough one but I will remember something",
"timeOffset": 467
},
{
"messageId": 1033,
"senderWorkerId": 1,
"text": "@203371 was a good one",
"timeOffset": 509
},
{
"messageId": 1034,
"senderWorkerId": 0,
"text": "Ooh that seems cool! Thanks for the input. I\'m ready to submit if you are.",
"timeOffset": 564
},
{
"messageId": 1035,
"senderWorkerId": 1,
"text": "It is animated, sci fi, and has action",
"timeOffset": 571
},
{
"messageId": 1036,
"senderWorkerId": 1,
"text": "Glad I could help",
"timeOffset": 579
},
{
"messageId": 1037,
"senderWorkerId": 0,
"text": "Nice",
"timeOffset": 581
},
{
"messageId": 1038,
"senderWorkerId": 0,
"text": "Take care, cheers!",
"timeOffset": 591
},
{
"messageId": 1039,
"senderWorkerId": 1,
"text": "bye",
"timeOffset": 608
}
],
"messages_translated": [
{
"messageId": 1021,
"senderWorkerId": 0,
"text": "Olá, como você está? Estou procurando recomendações de filmes.",
"timeOffset": 0
},
{
"messageId": 1022,
"senderWorkerId": 1,
"text": "Eu estou indo bem. Qual tipo de filmes você gosta?",
"timeOffset": 15
},
{
"messageId": 1023,
"senderWorkerId": 0,
"text": "Eu gosto de animações como @84779 e @191602.",
"timeOffset": 66
},
{
"messageId": 1024,
"senderWorkerId": 0,
"text": "Eu também gosto de @122159.",
"timeOffset": 86
},
{
"messageId": 1025,
"senderWorkerId": 0,
"text": "Qualquer coisa artística",
"timeOffset": 95
},
{
"messageId": 1026,
"senderWorkerId": 1,
"text": "Você pode gostar de saber que foi um bom filme.",
"timeOffset": 135
},
{
"messageId": 1027,
"senderWorkerId": 0,
"text": "O que é isso?",
"timeOffset": 151
},
{
"messageId": 1028,
"senderWorkerId": 1,
"text": "Tem um bebê que trabalha para uma empresa e é adotado. É muito engraçado.",
"timeOffset": 207
},
{
"messageId": 1029,
"senderWorkerId": 0,
"text": "Isso parece ser uma comédia legal.",
"timeOffset": 238
},
{
"messageId": 1030,
"senderWorkerId": 0,
"text": "Você tem alguma recomendação animada que seja um pouco mais dramática, como por exemplo @151313?",
"timeOffset": 272
},
{
"messageId": 1031,
"senderWorkerId": 0,
"text": "Eu gosto de comédias, mas prefiro filmes com um pouco mais de profundidade.",
"timeOffset": 327
},
{
"messageId": 1032,
"senderWorkerId": 1,
"text": "Isso é um desafio, mas eu me lembrarei de algo.",
"timeOffset": 467
},
{
"messageId": 1033,
"senderWorkerId": 1,
"text": "@203371 Foi um bom dia.",
"timeOffset": 509
},
{
"messageId": 1034,
"senderWorkerId": 0,
"text": "Ah, parece legal! Obrigado pela contribuição. Estou pronto para enviar se você estiver.",
"timeOffset": 564
},
{
"messageId": 1035,
"senderWorkerId": 1,
"text": "É animado, de ficção científica e tem ação.",
"timeOffset": 571
},
{
"messageId": 1036,
"senderWorkerId": 1,
"text": "Fico feliz em poder ajudar.",
"timeOffset": 579
},
{
"messageId": 1037,
"senderWorkerId": 0,
"text": "Legal",
"timeOffset": 581
},
{
"messageId": 1038,
"senderWorkerId": 0,
"text": "Cuide-se, abraços!",
"timeOffset": 591
},
{
"messageId": 1039,
"senderWorkerId": 1,
"text": "Adeus",
"timeOffset": 608
}
],
"movieMentions": [
{
"movieId": "203371",
"movieName": "Final Fantasy: The Spirits Within (2001)"
},
{
"movieId": "84779",
"movieName": "The Triplets of Belleville (2003)"
},
{
"movieId": "122159",
"movieName": "Mary and Max (2009)"
},
{
"movieId": "151313",
"movieName": "A Scanner Darkly (2006)"
},
{
"movieId": "191602",
"movieName": "Waking Life (2001)"
},
{
"movieId": "165710",
"movieName": "The Boss Baby (2017)"
}
],
"respondentQuestions": [
{
"liked": 1,
"movieId": "203371",
"seen": 0,
"suggested": 1
},
{
"liked": 1,
"movieId": "84779",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "122159",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "151313",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "191602",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "165710",
"seen": 0,
"suggested": 1
}
],
"respondentWorkerId": 1,
"initiatorWorkerId": 0,
"initiatorQuestions": [
{
"liked": 1,
"movieId": "203371",
"seen": 0,
"suggested": 1
},
{
"liked": 1,
"movieId": "84779",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "122159",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "151313",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "191602",
"seen": 1,
"suggested": 0
},
{
"liked": 1,
"movieId": "165710",
"seen": 0,
"suggested": 1
}
]
}
```
### Data Fields
The dataset is published in the “jsonl” format, i.e., as a text file where each line corresponds to a Dialogue given as a valid JSON document.
A Dialogue contains these fields:
**conversationId:** an integer
**initiatorWorkerId:** an integer identifying the worker initiating the conversation (the recommendation seeker)
**respondentWorkerId:** an integer identifying the worker responding to the initiator (the recommender)
**messages:** a list of Message objects
**messages_translated:** a list of Message objects
**movieMentions:** a dict mapping movie IDs mentioned in this dialogue to movie names
**initiatorQuestions:** a dictionary mapping movie IDs to the labels supplied by the initiator. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.
**respondentQuestions:** a dictionary mapping movie IDs to the labels supplied by the respondent. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.
Each Message of **messages** contains these fields:
**messageId:** a unique ID for this message
**text:** a string with the actual message. The string may contain a token starting with @ followed by an integer. This is a movie ID which can be looked up in the movieMentions field of the Dialogue object.
**timeOffset:** time since start of dialogue in seconds
**senderWorkerId:** the ID of the worker sending the message, either initiatorWorkerId or respondentWorkerId.
Each Message of **messages_translated** contains the same struct with the text translated into Portuguese.
The labels in initiatorQuestions and respondentQuestions have the following meaning:
*suggested:* 0 if it was mentioned by the seeker, 1 if it was a suggestion from the recommender
*seen:* 0 if the seeker has not seen the movie, 1 if they have seen it, 2 if they did not say
*liked:* 0 if the seeker did not like the movie, 1 if they liked it, 2 if they did not say
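As a small illustration of this schema, the sketch below loads the dataset with the `datasets` library, builds a movie-ID lookup from the `movieMentions` field, and replaces the `@<movieId>` tokens in the translated messages with movie names. This is only a usage sketch (not part of the original release) and assumes the field names documented above, the default `train` split, and that list fields load as lists of dicts.

```python
import re
from datasets import load_dataset

ds = load_dataset("matheusrdgsf/re_dial_ptbr", split="train")

dialogue = ds[0]
# Map movie IDs to names using the movieMentions field of this dialogue.
id_to_name = {m["movieId"]: m["movieName"] for m in (dialogue["movieMentions"] or [])}

def resolve_mentions(text):
    # Replace "@<movieId>" tokens with the corresponding movie name, when known.
    return re.sub(r"@(\d+)", lambda m: id_to_name.get(m.group(1), m.group(0)), text)

for msg in dialogue["messages_translated"]:
    print(msg["senderWorkerId"], resolve_mentions(msg["text"]))
```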
### Data Splits
The original dataset contains a total of 11348 dialogues, 10006 for training and model selection, and 1342 for testing.
This translated version has similar counts, with the train split reduced by roughly 10%.
### Contributions
This work has been done by [matheusrdg](https://github.com/matheusrdg) and [wfco](https://github.com/willianfco).
The translation of this dataset was made possible thanks to the Maritalk API.
| matheusrdgsf/re_dial_ptbr | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:conversational",
"task_categories:translation",
"size_categories:10K<n<100K",
"language:pt",
"language:en",
"license:mit",
"conversational recommendation",
"recommendation",
"conversational",
"region:us"
]
| 2023-11-13T17:20:04+00:00 | {"language": ["pt", "en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "text2text-generation", "conversational", "translation"], "pretty_name": "ReDial (Recommendation Dialogues) PTBR", "dataset_info": {"features": [{"name": "conversationId", "dtype": "int32"}, {"name": "messages", "list": [{"name": "messageId", "dtype": "int64"}, {"name": "senderWorkerId", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "timeOffset", "dtype": "int64"}]}, {"name": "messages_translated", "list": [{"name": "messageId", "dtype": "int64"}, {"name": "senderWorkerId", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "timeOffset", "dtype": "int64"}]}, {"name": "movieMentions", "list": [{"name": "movieId", "dtype": "string"}, {"name": "movieName", "dtype": "string"}]}, {"name": "respondentQuestions", "list": [{"name": "liked", "dtype": "int64"}, {"name": "movieId", "dtype": "string"}, {"name": "seen", "dtype": "int64"}, {"name": "suggested", "dtype": "int64"}]}, {"name": "respondentWorkerId", "dtype": "int32"}, {"name": "initiatorWorkerId", "dtype": "int32"}, {"name": "initiatorQuestions", "list": [{"name": "liked", "dtype": "int64"}, {"name": "movieId", "dtype": "string"}, {"name": "seen", "dtype": "int64"}, {"name": "suggested", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 26389658, "num_examples": 9005}, {"name": "test", "num_bytes": 3755474, "num_examples": 1342}], "download_size": 11072939, "dataset_size": 30145132}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["conversational recommendation", "recommendation", "conversational"]} | 2023-11-13T18:14:31+00:00 | []
| [
"pt",
"en"
]
| TAGS
#task_categories-text-classification #task_categories-text2text-generation #task_categories-conversational #task_categories-translation #size_categories-10K<n<100K #language-Portuguese #language-English #license-mit #conversational recommendation #recommendation #conversational #region-us
| # Dataset Card for ReDial - PTBR
- Original dataset: Redial Huggingface
- Homepage: ReDial Dataset
- Repository: ReDialData
- Paper: Towards Deep Conversational Recommendations
### Dataset Summary
The ReDial (Recommendation Dialogues) PTBR dataset is an annotated collection of dialogues in which users recommend movies to each other, translated into Brazilian Portuguese.
The adapted Brazilian Portuguese version of this dataset was translated using Maritalk. This translated version opens up opportunities for research at the intersection of goal-directed dialogue systems (such as restaurant recommendations) and free-form, colloquial dialogue systems.
Some samples from the original dataset have been removed as we've reached the usage limit in Maritalk. Consequently, the training set has been reduced by nearly 10%.
### Supported Tasks and Leaderboards
### Languages
English and Portuguese.
## Dataset Structure
### Data Instances
### Data Fields
The dataset is published in the “jsonl” format, i.e., as a text file where each line corresponds to a Dialogue given as a valid JSON document.
A Dialogue contains these fields:
conversationId: an integer
initiatorWorkerId: an integer identifying the worker initiating the conversation (the recommendation seeker)
respondentWorkerId: an integer identifying the worker responding to the initiator (the recommender)
messages: a list of Message objects
messages_translated: a list of Message objects
movieMentions: a dict mapping movie IDs mentioned in this dialogue to movie names
initiatorQuestions: a dictionary mapping movie IDs to the labels supplied by the initiator. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.
respondentQuestions: a dictionary mapping movie IDs to the labels supplied by the respondent. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.
Each Message of messages contains these fields:
messageId: a unique ID for this message
text: a string with the actual message. The string may contain a token starting with @ followed by an integer. This is a movie ID which can be looked up in the movieMentions field of the Dialogue object.
timeOffset: time since start of dialogue in seconds
senderWorkerId: the ID of the worker sending the message, either initiatorWorkerId or respondentWorkerId.
Each Message of messages_translated contains the same struct with the text translated into Portuguese.
The labels in initiatorQuestions and respondentQuestions have the following meaning:
*suggested:* 0 if it was mentioned by the seeker, 1 if it was a suggestion from the recommender
*seen:* 0 if the seeker has not seen the movie, 1 if they have seen it, 2 if they did not say
*liked:* 0 if the seeker did not like the movie, 1 if they liked it, 2 if they did not say
### Data Splits
The original dataset contains a total of 11348 dialogues, 10006 for training and model selection, and 1342 for testing.
This translated version has similar counts, with the train split reduced by roughly 10%.
### Contributions
This work has been done by matheusrdg and wfco.
The translation of this dataset was made possible thanks to the Maritalk API.
| [
"# Dataset Card for ReDial - PTBR\n\n- Original dataset: Redial Huggingface\n- Homepage: ReDial Dataset\n- Repository: ReDialData\n- Paper: Towards Deep Conversational Recommendations",
"### Dataset Summary\n\nThe ReDial (Recommendation Dialogues) PTBR dataset is an annotated collection of dialogues where users recommend movies to each other translated to brazilian portuguese.\n\nThe adapted version of this dataset in Brazilian Portuguese was translated by the Maritalk. This translated version opens up opportunities fo research at the intersection of goal-directed dialogue systems (such as restaurant recommendations) and free-form, colloquial dialogue systems.\n\nSome samples from the original dataset have been removed as we've reached the usage limit in Maritalk. Consequently, the training set has been reduced by nearly 10%.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish and Portuguese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nThe dataset is published in the “jsonl” format, i.e., as a text file where each line corresponds to a Dialogue given as a valid JSON document.\n\nA Dialogue contains these fields:\n\nconversationId: an integer\ninitiatorWorkerId: an integer identifying to the worker initiating the conversation (the recommendation seeker)\nrespondentWorkerId: an integer identifying the worker responding to the initiator (the recommender)\nmessages: a list of Message objects\nmessages_translated: a list of Message objects\nmovieMentions: a dict mapping movie IDs mentioned in this dialogue to movie names\ninitiatorQuestions: a dictionary mapping movie IDs to the labels supplied by the initiator. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.\nrespondentQuestions: a dictionary mapping movie IDs to the labels supplied by the respondent. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.\nEach Message of messages contains these fields:\n\nmessageId: a unique ID for this message\ntext: a string with the actual message. The string may contain a token starting with @ followed by an integer. This is a movie ID which can be looked up in the movieMentions field of the Dialogue object.\ntimeOffset: time since start of dialogue in seconds\nsenderWorkerId: the ID of the worker sending the message, either initiatorWorkerId or respondentWorkerId.\n\nEach Message of messages_translated contains the same struct with the text translated to portuguese.\n\nThe labels in initiatorQuestions and respondentQuestions have the following meaning:\n*suggested:* 0 if it was mentioned by the seeker, 1 if it was a suggestion from the recommender\n*seen:* 0 if the seeker has not seen the movie, 1 if they have seen it, 2 if they did not say\n*liked:* 0 if the seeker did not like the movie, 1 if they liked it, 2 if they did not say",
"### Data Splits\n\nThe original dataset contains a total of 11348 dialogues, 10006 for training and model selection, and 1342 for testing.\nThis translated version has near values but 10% reduced in train split.",
"### Contributions\n\nThis work have has done by matheusrdg and wfco.\nThe translation of this dataset was made possible thanks to the Maritalk API."
]
| [
"TAGS\n#task_categories-text-classification #task_categories-text2text-generation #task_categories-conversational #task_categories-translation #size_categories-10K<n<100K #language-Portuguese #language-English #license-mit #conversational recommendation #recommendation #conversational #region-us \n",
"# Dataset Card for ReDial - PTBR\n\n- Original dataset: Redial Huggingface\n- Homepage: ReDial Dataset\n- Repository: ReDialData\n- Paper: Towards Deep Conversational Recommendations",
"### Dataset Summary\n\nThe ReDial (Recommendation Dialogues) PTBR dataset is an annotated collection of dialogues where users recommend movies to each other translated to brazilian portuguese.\n\nThe adapted version of this dataset in Brazilian Portuguese was translated by the Maritalk. This translated version opens up opportunities fo research at the intersection of goal-directed dialogue systems (such as restaurant recommendations) and free-form, colloquial dialogue systems.\n\nSome samples from the original dataset have been removed as we've reached the usage limit in Maritalk. Consequently, the training set has been reduced by nearly 10%.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish and Portuguese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nThe dataset is published in the “jsonl” format, i.e., as a text file where each line corresponds to a Dialogue given as a valid JSON document.\n\nA Dialogue contains these fields:\n\nconversationId: an integer\ninitiatorWorkerId: an integer identifying to the worker initiating the conversation (the recommendation seeker)\nrespondentWorkerId: an integer identifying the worker responding to the initiator (the recommender)\nmessages: a list of Message objects\nmessages_translated: a list of Message objects\nmovieMentions: a dict mapping movie IDs mentioned in this dialogue to movie names\ninitiatorQuestions: a dictionary mapping movie IDs to the labels supplied by the initiator. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.\nrespondentQuestions: a dictionary mapping movie IDs to the labels supplied by the respondent. Each label is a bool corresponding to whether the initiator has said he saw the movie, liked it, or suggested it.\nEach Message of messages contains these fields:\n\nmessageId: a unique ID for this message\ntext: a string with the actual message. The string may contain a token starting with @ followed by an integer. This is a movie ID which can be looked up in the movieMentions field of the Dialogue object.\ntimeOffset: time since start of dialogue in seconds\nsenderWorkerId: the ID of the worker sending the message, either initiatorWorkerId or respondentWorkerId.\n\nEach Message of messages_translated contains the same struct with the text translated to portuguese.\n\nThe labels in initiatorQuestions and respondentQuestions have the following meaning:\n*suggested:* 0 if it was mentioned by the seeker, 1 if it was a suggestion from the recommender\n*seen:* 0 if the seeker has not seen the movie, 1 if they have seen it, 2 if they did not say\n*liked:* 0 if the seeker did not like the movie, 1 if they liked it, 2 if they did not say",
"### Data Splits\n\nThe original dataset contains a total of 11348 dialogues, 10006 for training and model selection, and 1342 for testing.\nThis translated version has near values but 10% reduced in train split.",
"### Contributions\n\nThis work have has done by matheusrdg and wfco.\nThe translation of this dataset was made possible thanks to the Maritalk API."
]
| [
91,
52,
151,
10,
10,
6,
6,
495,
49,
36
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-text2text-generation #task_categories-conversational #task_categories-translation #size_categories-10K<n<100K #language-Portuguese #language-English #license-mit #conversational recommendation #recommendation #conversational #region-us \n# Dataset Card for ReDial - PTBR\n\n- Original dataset: Redial Huggingface\n- Homepage: ReDial Dataset\n- Repository: ReDialData\n- Paper: Towards Deep Conversational Recommendations### Dataset Summary\n\nThe ReDial (Recommendation Dialogues) PTBR dataset is an annotated collection of dialogues where users recommend movies to each other translated to brazilian portuguese.\n\nThe adapted version of this dataset in Brazilian Portuguese was translated by the Maritalk. This translated version opens up opportunities fo research at the intersection of goal-directed dialogue systems (such as restaurant recommendations) and free-form, colloquial dialogue systems.\n\nSome samples from the original dataset have been removed as we've reached the usage limit in Maritalk. Consequently, the training set has been reduced by nearly 10%.### Supported Tasks and Leaderboards### Languages\n\nEnglish and Portuguese.## Dataset Structure### Data Instances"
]
|
231efb0d96b50e6e8464efc24a5dd0881e855b0a |
# Dataset Card for end2end_textclassification
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/end2end_textclassification")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/end2end_textclassification")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Label | label_selection | True | N/A | ['World', 'Sports', 'Business', 'Sci/Tech'] |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value/s of the suggestion and its metadata, respectively. As a result, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
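As a rough sketch of what this configuration looks like in code, the snippet below reproduces the field, question, and guidelines described above using the Argilla Python client (assuming an Argilla 1.x-style `FeedbackDataset` API; this is illustrative, not the exact code used to build the dataset):

```python
import argilla as rg

# One text field and one single-label question, mirroring the tables above.
dataset = rg.FeedbackDataset(
    fields=[rg.TextField(name="text", title="Text", required=True)],
    questions=[
        rg.LabelQuestion(
            name="label",
            title="Label",
            labels=["World", "Sports", "Business", "Sci/Tech"],
            required=True,
        )
    ],
    guidelines="Classify the articles into one of the four categories.",
)
```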
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "record-0",
"fields": {
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
},
"metadata": {},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "record-0",
"label": [],
"label-suggestion": null,
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"metadata": "{}",
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
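As a hedged example of working with these fields, the following sketch loads the records with the `datasets` library and counts how many of them already carry a label suggestion (field names as documented above; the suggestion field is `null`/`None` when absent):

```python
from datasets import load_dataset

ds = load_dataset("argilla/end2end_textclassification", split="train")

# Records whose "label-suggestion" field is populated.
with_suggestion = sum(1 for record in ds if record["label-suggestion"] is not None)
print(f"{with_suggestion}/{len(ds)} records have a label suggestion")
```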
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Classify the articles into one of the four categories.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | argilla/end2end_textclassification | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
]
| 2023-11-13T17:25:52+00:00 | {"size_categories": "1K<n<10K", "tags": ["rlfh", "argilla", "human-feedback"]} | 2024-02-13T00:56:09+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us
| Dataset Card for end2end\_textclassification
============================================
This dataset has been created with Argilla.
As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'.
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla.
* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'.
* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:
### Load with 'datasets'
To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:
### Supported Tasks and Leaderboards
This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.
There are no leaderboards associated with this dataset.
### Languages
Dataset Structure
-----------------
### Data in Argilla
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking.
The suggestions are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value/s of the suggestion and its metadata, respectively. As a result, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
The guidelines are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
While the same record in HuggingFace 'datasets' looks as follows:
### Data Fields
Among the dataset fields, we differentiate between the following:
* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
+ text is of type 'text'.
* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.
+ label is of type 'label\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
+ (optional) label-suggestion is of type 'label\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
Additionally, we also have two more fields that are optional and are the following:
* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
* external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is 'train'.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation guidelines
Classify the articles into one of the four categories.
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines\n\n\nClassify the articles into one of the four categories.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines\n\n\nClassify the articles into one of the four categories.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
29,
162,
40,
53,
68,
11,
404,
40,
498,
27,
7,
4,
10,
10,
5,
17,
5,
9,
18,
7,
8,
14,
6,
6,
5
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us \n### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.### Languages\n\n\nDataset Structure\n-----------------",
"passage: ### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:"
]
|
b352145b68c006d2ead56a7ffd988983c547cccc | # Professor HeidelTime
[Paper](https://dl.acm.org/doi/10.1145/3583780.3615130) [GitHub](https://github.com/hmosousa/professor_heideltime)
Professor HeidelTime is a project to create a multilingual corpus weakly labeled with [HeidelTime](https://github.com/HeidelTime/heideltime), a temporal tagger.
## Corpus Details
The weak labeling was performed in six languages. Here are the specifics of the corpus for each language:
| Dataset | Language | Documents | From | To | Tokens | Timexs |
| ----------------------- | -------- | --------- | ---------- | ---------- | ---------- | -------- |
| All the News 2.0 | EN | 24,642 | 2016-01-01 | 2020-04-02 | 18,755,616 | 254,803 |
| Italian Crime News | IT | 9,619 | 2011-01-01 | 2021-12-31 | 3,296,898 | 58,823 |
| German News Dataset | DE | 33,266 | 2003-01-01 | 2022-12-31 | 21,617,888 | 348,011 |
| ElMundo News | ES | 19,095 | 2005-12-02 | 2021-10-18 | 12,515,410 | 194,043 |
| French Financial News | FR | 24,293 | 2017-10-19 | 2021-03-19 | 1,673,053 | 83,431 |
| Público News | PT | 27,154 | 2000-11-14 | 2002-03-20 | 5,929,377 | 111,810 |
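## Usage

Each language is available as its own configuration. A minimal sketch (the config names are taken from this repository's dataset configuration):

```python
from datasets import load_dataset

# One configuration per language: "english", "italian", "german", "spanish", "french", "portuguese"
english_news = load_dataset("hugosousa/ProfessorHeidelTime", "english")
print(english_news)
```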
## Contact
For more information, reach out to [Hugo Sousa](https://hugosousa.net) at <[email protected]>.
This framework is a part of the [Text2Story](https://text2story.inesctec.pt) project. This project is financed by the ERDF – European Regional Development Fund through the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia within project PTDC/CCI-COM/31857/2017 (NORTE-01-0145-FEDER-03185).
## Cite
If you use this work, please cite the following [paper](https://dl.acm.org/doi/10.1145/3583780.3615130):
```bibtex
@inproceedings{10.1145/3583780.3615130,
author = {Sousa, Hugo and Campos, Ricardo and Jorge, Al\'{\i}pio},
title = {TEI2GO: A Multilingual Approach for Fast Temporal Expression Identification},
year = {2023},
isbn = {9798400701245},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3583780.3615130},
doi = {10.1145/3583780.3615130},
booktitle = {Proceedings of the 32nd ACM International Conference on Information and Knowledge Management},
pages = {5401–5406},
numpages = {6},
keywords = {temporal expression identification, multilingual corpus, weak label},
location = {Birmingham, United Kingdom},
series = {CIKM '23}
}
```
| hugosousa/ProfessorHeidelTime | [
"task_categories:token-classification",
"task_ids:parsing",
"task_ids:part-of-speech",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:fr",
"language:pt",
"language:de",
"language:it",
"language:es",
"license:mit",
"Timex",
"Timexs",
"Temporal Expression",
"Temporal Expressions",
"Temporal Information",
"Timex Identification",
"Timex Classification",
"Timex Extraction",
"region:us"
]
| 2023-11-13T17:31:13+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en", "fr", "pt", "de", "fr", "it", "es"], "license": ["mit"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["parsing", "part-of-speech", "named-entity-recognition"], "pretty_name": "Professor HeidelTime", "tags": ["Timex", "Timexs", "Temporal Expression", "Temporal Expressions", "Temporal Information", "Timex Identification", "Timex Classification", "Timex Extraction"], "configs": [{"config_name": "portuguese", "data_files": "portuguese.json"}, {"config_name": "english", "data_files": "english.json"}, {"config_name": "french", "data_files": "french.json"}, {"config_name": "italian", "data_files": "italian.json"}, {"config_name": "spanish", "data_files": "spanish.json"}, {"config_name": "german", "data_files": "german.json"}]} | 2023-11-13T17:43:54+00:00 | []
| [
"en",
"fr",
"pt",
"de",
"it",
"es"
]
| TAGS
#task_categories-token-classification #task_ids-parsing #task_ids-part-of-speech #task_ids-named-entity-recognition #annotations_creators-machine-generated #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-Portuguese #language-German #language-Italian #language-Spanish #license-mit #Timex #Timexs #Temporal Expression #Temporal Expressions #Temporal Information #Timex Identification #Timex Classification #Timex Extraction #region-us
| Professor HeidelTime
====================
Paper GitHub
Professor HeidelTime is a project to create a multilingual corpus weakly labeled with HeidelTime, a temporal tagger.
Corpus Details
--------------
The weak labeling was performed in six languages. Here are the specifics of the corpus for each language:
Contact
-------
For more information, reach out to Hugo Sousa at [hugo.o.sousa@URL](mailto:hugo.o.sousa@URL).
This framework is a part of the Text2Story project. This project is financed by the ERDF – European Regional Development Fund through the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia within project PTDC/CCI-COM/31857/2017 (NORTE-01-0145-FEDER-03185).
Cite
----
If you use this work, please cite the following paper:
| []
| [
"TAGS\n#task_categories-token-classification #task_ids-parsing #task_ids-part-of-speech #task_ids-named-entity-recognition #annotations_creators-machine-generated #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-Portuguese #language-German #language-Italian #language-Spanish #license-mit #Timex #Timexs #Temporal Expression #Temporal Expressions #Temporal Information #Timex Identification #Timex Classification #Timex Extraction #region-us \n"
]
| [
173
]
| [
"passage: TAGS\n#task_categories-token-classification #task_ids-parsing #task_ids-part-of-speech #task_ids-named-entity-recognition #annotations_creators-machine-generated #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-Portuguese #language-German #language-Italian #language-Spanish #license-mit #Timex #Timexs #Temporal Expression #Temporal Expressions #Temporal Information #Timex Identification #Timex Classification #Timex Extraction #region-us \n"
]
|
bce699ffd250fbd7921d7e221124a71f2069e055 | # Concatenated STS datasets, translated to Norwegian Bokmål
Machine translated using the *No language left behind* model series, specifically the 1.3B variant: https://huggingface.co/facebook/nllb-200-distilled-1.3B
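As an illustration only (not necessarily the exact pipeline used to build this dataset), a single sentence can be translated with that model through the `transformers` translation pipeline; the FLORES-200 codes `eng_Latn` and `nob_Latn` below are assumptions:

```python
from transformers import pipeline

# NLLB-200 uses FLORES-200 language codes; "nob_Latn" is Norwegian Bokmål
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-1.3B",
    src_lang="eng_Latn",
    tgt_lang="nob_Latn",
)

print(translator("The two sentences describe the same event.")[0]["translation_text"])
```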
This dataset contains the following data:
```
'tollefj/biosses-sts-NOB',
'tollefj/sickr-sts-NOB',
'tollefj/sts12-sts-NOB',
'tollefj/sts13-sts-NOB',
'tollefj/sts14-sts-NOB',
'tollefj/sts15-sts-NOB',
'tollefj/sts16-sts-NOB'
``` | tollefj/sts-concatenated-NOB | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"language:no",
"language:nb",
"license:cc-by-4.0",
"region:us"
]
| 2023-11-13T17:41:44+00:00 | {"language": ["no", "nb"], "license": "cc-by-4.0", "task_categories": ["sentence-similarity", "text-classification"]} | 2024-01-06T12:25:06+00:00 | []
| [
"no",
"nb"
]
| TAGS
#task_categories-sentence-similarity #task_categories-text-classification #language-Norwegian #language-Norwegian Bokmål #license-cc-by-4.0 #region-us
| # Concatenated STS datasets, translated to Norwegian Bokmål
Machine translated using the *No language left behind* model series, specifically the 1.3B variant: URL
This dataset contains the following data:
| [
"# Concatenated STS datasets, translated to Norwegian Bokmål\nMachine translated using the *No language left behind* model series, specifically the 1.3B variant: URL\n\nThis dataset contains the following data:"
]
| [
"TAGS\n#task_categories-sentence-similarity #task_categories-text-classification #language-Norwegian #language-Norwegian Bokmål #license-cc-by-4.0 #region-us \n",
"# Concatenated STS datasets, translated to Norwegian Bokmål\nMachine translated using the *No language left behind* model series, specifically the 1.3B variant: URL\n\nThis dataset contains the following data:"
]
| [
53,
49
]
| [
"passage: TAGS\n#task_categories-sentence-similarity #task_categories-text-classification #language-Norwegian #language-Norwegian Bokmål #license-cc-by-4.0 #region-us \n# Concatenated STS datasets, translated to Norwegian Bokmål\nMachine translated using the *No language left behind* model series, specifically the 1.3B variant: URL\n\nThis dataset contains the following data:"
]
|
87e260a0c3f2c1333f6971ceb75ea342133571f3 |
Leaks between the train split and the test split: 0 rows, i.e. 0.0%
Leaks between the validation split and the test split: 0 rows, i.e. 0.0%
Duplicate rows in the train split: 2 rows, i.e. 0.013%
Duplicate rows in the validation split: 0 rows, i.e. 0.0%
Duplicate rows in the test split: 0 rows, i.e. 0.0% | bourdoiscatie/multiconer_fr_clean | [
"region:us"
]
| 2023-11-13T17:48:56+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3973230, "num_examples": 15539}, {"name": "validation", "num_bytes": 216957, "num_examples": 827}, {"name": "test", "num_bytes": 223148, "num_examples": 855}], "download_size": 1301808, "dataset_size": 4413335}} | 2024-01-17T09:16:04+00:00 | []
| []
| TAGS
#region-us
|
Leaks between the train split and the test split: 0 rows, i.e. 0.0%
Leaks between the validation split and the test split: 0 rows, i.e. 0.0%
Duplicate rows in the train split: 2 rows, i.e. 0.013%
Duplicate rows in the validation split: 0 rows, i.e. 0.0%
Duplicate rows in the test split: 0 rows, i.e. 0.0% | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
8fdb0f8dae8d3ee228269f8a8c83620de5a48677 | # Dataset Card for "10_RANDOM_WAT_FILTERED_LINKS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cc-platform-links/10_RANDOM_WAT_FILTERED_LINKS | [
"region:us"
]
| 2023-11-13T17:51:47+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1505397.4479696343, "num_examples": 19940}], "download_size": 474964, "dataset_size": 1505397.4479696343}} | 2023-11-13T17:51:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "10_RANDOM_WAT_FILTERED_LINKS"
More Information needed | [
"# Dataset Card for \"10_RANDOM_WAT_FILTERED_LINKS\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"10_RANDOM_WAT_FILTERED_LINKS\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"10_RANDOM_WAT_FILTERED_LINKS\"\n\nMore Information needed"
]
|
142d6e5843729c61feb0fdd4dade39e5df35ce38 | # Dataset Card for "CASF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/CASF | [
"region:us"
]
| 2023-11-13T18:28:09+00:00 | {"dataset_info": {"features": [{"name": "#code", "dtype": "string"}, {"name": "inputs", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 310419, "num_examples": 285}], "download_size": 110166, "dataset_size": 310419}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T18:28:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "CASF"
More Information needed | [
"# Dataset Card for \"CASF\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"CASF\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"CASF\"\n\nMore Information needed"
]
|
6f974c77691c2c211044c1c64521e1d6b615cc06 |
# Dataset Card for TrustLLM
## Dataset Summary
This repository provides datasets from the TrustLLM benchmark, including six aspects: truthfulness, safety, fairness, robustness, privacy, and machine ethics.
To find more details about TrustLLM, please visit the [project website](https://trustllmbenchmark.github.io/TrustLLM-Website/).
## Disclaimer
The dataset contains harmful content such as partial pornography, violence, bloodshed, or bias. The opinions expressed in the data do not reflect the views of the TrustLLM team. This dataset is strictly intended for research purposes and should not be used for any other illegal activities. We advocate for the responsible use of large language models.
### Download
Use the `trustllm` toolkit to download the dataset: [link](https://howiehwong.github.io/TrustLLM/#dataset-download).

Use `huggingface` to download the dataset:
```python
from datasets import load_dataset
# Load all sections
dataset = load_dataset("TrustLLM/TrustLLM-dataset")
# Load one of the sections
dataset = load_dataset("TrustLLM/TrustLLM-dataset", data_dir="safety")
```
## Contact
Contact Us: [[email protected]](mailto:[email protected])
| TrustLLM/TrustLLM-dataset | [
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"llm",
"trustworthy ai",
"nlp",
"region:us"
]
| 2023-11-13T18:51:03+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "configs": [{"config_name": "safety", "data_files": "safety/*json"}, {"config_name": "ethics", "data_files": "ethics/*json"}, {"config_name": "fairness", "data_files": "fairness/*json"}, {"config_name": "robustness", "data_files": "robustness/*json"}, {"config_name": "privacy", "data_files": "privacy/*json"}, {"config_name": "truthfulness", "data_files": "truthfulness/*json"}], "tags": ["llm", "trustworthy ai", "nlp"]} | 2024-01-31T11:28:51+00:00 | []
| [
"en"
]
| TAGS
#size_categories-10K<n<100K #language-English #license-apache-2.0 #llm #trustworthy ai #nlp #region-us
|
# Dataset Card for TrustLLM
## Dataset Summary
This repository provides datasets from the TrustLLM benchmark, including six aspects: truthfulness, safety, fairness, robustness, privacy, and machine ethics.
To find more details about TrustLLM, please visit the project website.
## Disclaimer
The dataset contains harmful content such as partial pornography, violence, bloodshed, or bias. The opinions expressed in the data do not reflect the views of the TrustLLM team. This dataset is strictly intended for research purposes and should not be used for any other illegal activities. We advocate for the responsible use of large language models.
### Download
Use the 'trustllm' toolkit to download the dataset: link.

Use 'huggingface' to download the dataset:
## Contact
Contact Us: trustllm.benchmark@URL
| [
"# Dataset Card for TrustLLM",
"## Dataset Summary\n\nThis repository provides datasets from the TrustLLM benchmark, including six aspects: truthfulness, safety, fairness, robustness, privacy, and machine ethics.\n\nTo find more details about TrustLLM, please visit the project website.",
"## Disclaimer\n\nThe dataset contains harmful content such as partial pornography, violence, bloodshed, or bias. The opinions expressed in the data do not reflect the views of the TrustLLM team. This dataset is strictly intended for research purposes and should not be used for any other illegal activities. We advocate for the responsible use of large language models.",
"### Download\n\nUse 'trustllm' toolkit to download the dataset: link.\n\nUse 'hugginface' to download the dataset:",
"## Contact\n\nContact Us: trustllm.benchmark@URL"
]
| [
"TAGS\n#size_categories-10K<n<100K #language-English #license-apache-2.0 #llm #trustworthy ai #nlp #region-us \n",
"# Dataset Card for TrustLLM",
"## Dataset Summary\n\nThis repository provides datasets from the TrustLLM benchmark, including six aspects: truthfulness, safety, fairness, robustness, privacy, and machine ethics.\n\nTo find more details about TrustLLM, please visit the project website.",
"## Disclaimer\n\nThe dataset contains harmful content such as partial pornography, violence, bloodshed, or bias. The opinions expressed in the data do not reflect the views of the TrustLLM team. This dataset is strictly intended for research purposes and should not be used for any other illegal activities. We advocate for the responsible use of large language models.",
"### Download\n\nUse 'trustllm' toolkit to download the dataset: link.\n\nUse 'hugginface' to download the dataset:",
"## Contact\n\nContact Us: trustllm.benchmark@URL"
]
| [
41,
8,
59,
79,
31,
14
]
| [
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-apache-2.0 #llm #trustworthy ai #nlp #region-us \n# Dataset Card for TrustLLM## Dataset Summary\n\nThis repository provides datasets from the TrustLLM benchmark, including six aspects: truthfulness, safety, fairness, robustness, privacy, and machine ethics.\n\nTo find more details about TrustLLM, please visit the project website.## Disclaimer\n\nThe dataset contains harmful content such as partial pornography, violence, bloodshed, or bias. The opinions expressed in the data do not reflect the views of the TrustLLM team. This dataset is strictly intended for research purposes and should not be used for any other illegal activities. We advocate for the responsible use of large language models.### Download\n\nUse 'trustllm' toolkit to download the dataset: link.\n\nUse 'hugginface' to download the dataset:## Contact\n\nContact Us: trustllm.benchmark@URL"
]
|
37a64bd481945c50d7524fdea537303498e96704 | # Dataset Card for "MedicalTranscriptions_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/MedicalTranscriptions_test | [
"region:us"
]
| 2023-11-13T18:51:22+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "gold", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24235832, "num_examples": 4999}, {"name": "valid", "num_bytes": 2251838, "num_examples": 500}, {"name": "test", "num_bytes": 5226991, "num_examples": 999}], "download_size": 10111285, "dataset_size": 31714661}} | 2023-11-13T18:51:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "MedicalTranscriptions_test"
More Information needed | [
"# Dataset Card for \"MedicalTranscriptions_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"MedicalTranscriptions_test\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"MedicalTranscriptions_test\"\n\nMore Information needed"
]
|
e9b8429001834b99df171707b141fda5da83a8dd |
# Dataset information
Dataset concatenating all NER datasets, available in French and open-source, for 3 entities (LOC, PER, ORG).
There are a total of **420,264** rows, of which 346,071 are for training, 32,951 for validation and 41,242 for testing.
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/).
# Usage
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/frenchNER_3entities")
```
# Dataset
## Details of rows
| Dataset Original | Splits | Note |
| ----------- | ----------- | ----------- |
| [Multiconer](https://huggingface.co/datasets/aashsach/multiconer2)| 16,548 train / 857 validation / 0 test | In practice, we use the original validation set as the test set<br> and create a new validation set from 5% of the train set, i.e.<br> 15,721 train / 827 validation / 857 test|
| [Multinerd](https://huggingface.co/datasets/Babelscape/multinerd)| 140,880 train / 17,610 val / 17,695 test | |
| [Pii-masking-200k](https://huggingface.co/datasets/ai4privacy/pii-masking-200k)| 61,958 train / 0 validation / 0 test | Only dataset without duplicate data or leaks |
| [Wikiann](https://huggingface.co/datasets/wikiann)| 20,000 train / 10,000 val / 10,000 test | |
| [Wikiner](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr)| 120,682 train / 0 validation / 13,410 test | In practice, a validation set is created from 5% of the train set, i.e.<br> 113,296 train / 5,994 validation / 13,393 test |
## Removing duplicate data and leaks
The sum of the values of the datasets listed here gives the following result:
```
DatasetDict({
train: Dataset({
features: ['tokens', 'ner_tags', 'dataset'],
num_rows: 351855
})
validation: Dataset({
features: ['tokens', 'ner_tags', 'dataset'],
num_rows: 34431
})
test: Dataset({
features: ['tokens', 'ner_tags', 'dataset'],
num_rows: 41945
})
})
```
However, a data item in the training split of dataset A, while absent from A's own test split, may still appear in the test split of dataset B, which creates a leak once the concatenated A+B dataset is built.
The same logic applies to duplicated data, so both leaks and duplicates have to be removed.
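As a simplified illustration (not the exact procedure used here), such cross-split leaks can be removed by indexing the test examples and filtering the train split against them:

```python
def signature(example):
    # Identify a row by its token sequence
    return " ".join(example["tokens"])

def remove_leaks(train_split, test_split):
    # Drop any training row whose token sequence also appears in the test split
    test_signatures = {signature(row) for row in test_split}
    return train_split.filter(lambda row: signature(row) not in test_signatures)
```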
After our clean-up, we finally have the following numbers:
```
DatasetDict({
train: Dataset({
features: ['tokens', 'ner_tags', 'dataset'],
num_rows: 346071
})
validation: Dataset({
features: ['tokens', 'ner_tags', 'dataset'],
num_rows: 32951
})
test: Dataset({
features: ['tokens', 'ner_tags', 'dataset'],
num_rows: 41242
})
})
```
Note: in practice, the test split contains 8 lines which we failed to deduplicate, i.e. 0.019%.
### Details of entities (after cleaning)
<table>
<thead>
<tr>
<th><br>Datasets</th>
<th><br>Splits</th>
<th><br>O</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br>Multiconer</td>
<td><br>train</td>
<td><br>200,093</td>
<td><br>18,060</td>
<td><br>7,165</td>
<td><br>6,967</td>
</tr>
<tr>
<td><br>validation</td>
<td><br>10,900</td>
<td><br>1,069</td>
<td><br>389</td>
<td><br>328</td>
</tr>
<tr>
<td><br>test</td>
<td><br>11,287</td>
<td><br>979</td>
<td><br>387</td>
<td><br>381</td>
</tr>
<tr>
<td rowspan="3"><br>Multinerd</td>
<td><br>train</td>
<td><br>3,041,998</td>
<td><br>149,128</td>
<td><br>105,531</td>
<td><br>68,796</td>
</tr>
<tr>
<td><br>validation</td>
<td><br>410,934</td>
<td><br>17,479</td>
<td><br>13,988</td>
<td><br>3,475</td>
</tr>
<tr>
<td><br>test</td>
<td><br>417,886</td>
<td><br>18,567</td>
<td><br>14,083</td>
<td><br>3,636</td>
</tr>
<tr>
<td rowspan="1"><br>Pii-masking-200k</td>
<td><br>train</td>
<td><br>2,405,215</td>
<td><br>29,838</td>
<td><br>42,154</td>
<td><br>12,310</td>
</tr>
<tr>
<td rowspan="3"><br>Wikiann</td>
<td><br>train</td>
<td><br>60,165</td>
<td><br>20,288</td>
<td><br>17,033</td>
<td><br>24,429</td>
</tr>
<tr>
<td><br>validation</td>
<td><br>30,046</td>
<td><br>10,098</td>
<td><br>8,698</td>
<td><br>12,819</td>
</tr>
<tr>
<td><br>test</td>
<td><br>31,488</td>
<td><br>10,764</td>
<td><br>9,512</td>
<td><br>13,480</td>
</tr>
<tr>
<td rowspan="3"><br>Wikiner</td>
<td><br>train</td>
<td><br>2,691,294</td>
<td><br>110,079</td>
<td><br>131,839</td>
<td><br>38,988</td>
</tr>
<tr>
<td><br>validation</td>
<td><br>140,935</td>
<td><br>5,481</td>
<td><br>7,204</td>
<td><br>2,121</td>
</tr>
<tr>
<td><br>test</td>
<td><br>313,210</td>
<td><br>13,324</td>
<td><br>15,213</td>
<td><br>3,894</td>
</tr>
<tr>
<td rowspan="3"><br>Total</td>
<td><br>train</td>
<td><br><b>8,398,765</b></td>
<td><br><b>327,393</b></td>
<td><br><b>303,722</b></td>
<td><br><b>151,490</b></td>
</tr>
<tr>
<td><br>validation</td>
<td><br><b>592,815</b></td>
<td><br><b>34,127</b></td>
<td><br><b>30,279</b></td>
<td><br><b>18,743</b></td>
</tr>
<tr>
<td><br>test</td>
<td><br><b>773,871</b></td>
<td><br><b>43,634</b></td>
<td><br><b>39,195</b></td>
<td><br><b>21,391</b></td>
</tr>
</tbody>
</table>
## Columns
```
dataset_train = dataset['train'].to_pandas()
dataset_train.head()
tokens ner_tags dataset
0 [On, a, souvent, voulu, faire, de, La, Bruyère... [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, ... wikiner
1 [Les, améliorations, apportées, par, rapport, ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, ... wikiner
2 [Cette, assemblée, de, notables, ,, réunie, en... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, ... wikiner
3 [Wittgenstein, projetait, en, effet, d', élabo... [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... wikiner
4 [Le, premier, écrivain, à, écrire, des, fictio... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, ... wikiner
```
- the `tokens` column contains the tokens
- the `ner_tags` column contains the NER tags (IOB format with 0="O", 1="PER", 2="ORG" and 3="LOC")
- the `dataset` column identifies the row's original dataset (if you wish to apply filters to it)
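As a minimal sketch, the integer tags can be mapped back to their string labels and the rows filtered by their source dataset:

```python
from datasets import load_dataset

dataset = load_dataset("CATIE-AQ/frenchNER_3entities")

# Integer ids of the `ner_tags` column, as described above
id2label = {0: "O", 1: "PER", 2: "ORG", 3: "LOC"}

example = dataset["train"][0]
print(list(zip(example["tokens"], [id2label[t] for t in example["ner_tags"]]))[:10])

# Keep only the rows coming from one of the original datasets, e.g. wikiner
wikiner_train = dataset["train"].filter(lambda row: row["dataset"] == "wikiner")
print(wikiner_train.num_rows)
```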
## Split
- `train` corresponds to the concatenation of `multiconer` + `multinerd` + `pii-masking-200k` + `wikiann` + `wikiner`
- `validation` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiann` + `wikiner`
- `test` corresponds to the concatenation of `multiconer` + `multinerd` + `wikiann` + `wikiner`
# Citations
### multiconer
```
@inproceedings{multiconer2-report,
title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}},
author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin},
booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
year={2023},
publisher={Association for Computational Linguistics}}
@article{multiconer2-data,
title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}},
author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin},
year={2023}}
```
### multinerd
```
@inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812"}
```
### pii-masking-200k
```
@misc {ai4privacy_2023,
author = { {ai4Privacy} },
title = { pii-masking-200k (Revision 1d4c0a1) },
year = 2023,
url = { https://huggingface.co/datasets/ai4privacy/pii-masking-200k },
doi = { 10.57967/hf/1532 },
publisher = { Hugging Face }}
```
### wikiann
```
@inproceedings{rahimi-etal-2019-massively,
title = "Massively Multilingual Transfer for {NER}",
author = "Rahimi, Afshin and Li, Yuan and Cohn, Trevor",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1015",
pages = "151--164"}
```
### wikiner
```
@article{NOTHMAN2013151,
title = {Learning multilingual named entity recognition from Wikipedia},
journal = {Artificial Intelligence},
volume = {194},
pages = {151-175},
year = {2013},
note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources},
issn = {0004-3702},
doi = {https://doi.org/10.1016/j.artint.2012.03.006},
url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276},
author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. Curran}}
```
### frenchNER_3entities
```
@misc {frenchNER2024,
author = { {BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { frenchNER_3entities },
year = 2024,
url = { https://huggingface.co/CATIE-AQ/frenchNER_3entities },
doi = { 10.57967/hf/1751 },
publisher = { Hugging Face }
}
```
# License
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) | CATIE-AQ/frenchNER_3entities | [
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:fr",
"license:cc-by-4.0",
"region:us"
]
| 2023-11-13T18:59:10+00:00 | {"language": ["fr"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["token-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "int64"}, {"name": "dataset", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 16147720, "num_examples": 42144}, {"name": "train", "num_bytes": 161576681, "num_examples": 349195}, {"name": "validation", "num_bytes": 12398792, "num_examples": 33464}], "download_size": 43074463, "dataset_size": 190123193}} | 2024-02-07T09:09:32+00:00 | []
| [
"fr"
]
| TAGS
#task_categories-token-classification #size_categories-100K<n<1M #language-French #license-cc-by-4.0 #region-us
| Dataset information
===================
Dataset concatenating all NER datasets, available in French and open-source, for 3 entities (LOC, PER, ORG).
There are a total of 420,264 rows, of which 346,071 are for training, 32,951 for validation and 41,242 for testing.
Our methodology is described in a blog post available in English or French.
Usage
=====
Dataset
=======
Details of rows
---------------
Dataset Original: Multiconer, Splits: 16,548 train / 857 validation / 0 test, Note: In practice, we use the original validation set as the test set
and create a new validation set from 5% of the train set, i.e.
15,721 train / 827 validation / 857 test
Dataset Original: Multinerd, Splits: 140,880 train / 17,610 val / 17,695 test, Note:
Dataset Original: Pii-masking-200k, Splits: 61,958 train / 0 validation / 0 test, Note: Only dataset without duplicate data or leaks
Dataset Original: Wikiann, Splits: 20,000 train / 10,000 val / 10,000 test, Note:
Dataset Original: Wikiner, Splits: 120,682 train / 0 validation / 13,410 test, Note: In practice, a validation set is created from 5% of the train set, i.e.
113,296 train / 5,994 validation / 13,393 test
Removing duplicate data and leaks
---------------------------------
The sum of the values of the datasets listed here gives the following result:
However, a data item in training split A may not be in A's test split, but may be present in B's test set, creating a leak when we create the A+B dataset.
Note: in practice, the test split contains 8 lines which we failed to deduplicate, i.e. 0.019%.
### Details of entities (after cleaning)
Columns
-------
* the 'tokens' column contains the tokens
* the 'ner\_tags' column contains the NER tags (IOB format with 0="O", 1="PER", 2="ORG" and 3="LOC")
* the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it)
Split
-----
* 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiann' + 'wikiner'
* 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'
* 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'
s
### multiconer
### multinerd
### pii-masking-200k
### wikiann
### wikiner
### frenchNER\_3entities
License
=======
cc-by-4.0
| [
"### Details of entities (after cleaning)\n\n\n\nColumns\n-------\n\n\n* the 'tokens' column contains the tokens\n* the 'ner\\_tags' column contains the NER tags (IOB format with 0=\"O\", 1=\"PER\", 2=\"ORG\" and 3=\"LOC\")\n* the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it)\n\n\nSplit\n-----\n\n\n* 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiann' + 'wikiner'\n* 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'\n* 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'\n\n\ns",
"### multiconer",
"### multinerd",
"### pii-masking-200k",
"### wikiann",
"### wikiner",
"### frenchNER\\_3entities\n\n\nLicense\n=======\n\n\ncc-by-4.0"
]
| [
"TAGS\n#task_categories-token-classification #size_categories-100K<n<1M #language-French #license-cc-by-4.0 #region-us \n",
"### Details of entities (after cleaning)\n\n\n\nColumns\n-------\n\n\n* the 'tokens' column contains the tokens\n* the 'ner\\_tags' column contains the NER tags (IOB format with 0=\"O\", 1=\"PER\", 2=\"ORG\" and 3=\"LOC\")\n* the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it)\n\n\nSplit\n-----\n\n\n* 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiann' + 'wikiner'\n* 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'\n* 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'\n\n\ns",
"### multiconer",
"### multinerd",
"### pii-masking-200k",
"### wikiann",
"### wikiner",
"### frenchNER\\_3entities\n\n\nLicense\n=======\n\n\ncc-by-4.0"
]
| [
45,
218,
5,
5,
9,
4,
4,
20
]
| [
"passage: TAGS\n#task_categories-token-classification #size_categories-100K<n<1M #language-French #license-cc-by-4.0 #region-us \n### Details of entities (after cleaning)\n\n\n\nColumns\n-------\n\n\n* the 'tokens' column contains the tokens\n* the 'ner\\_tags' column contains the NER tags (IOB format with 0=\"O\", 1=\"PER\", 2=\"ORG\" and 3=\"LOC\")\n* the 'dataset' column identifies the row's original dataset (if you wish to apply filters to it)\n\n\nSplit\n-----\n\n\n* 'train' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'pii-masking-200k' + 'wikiann' + 'wikiner'\n* 'validation' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'\n* 'test' corresponds to the concatenation of 'multiconer' + 'multinerd' + 'wikiann' + 'wikiner'\n\n\ns### multiconer### multinerd### pii-masking-200k### wikiann### wikiner### frenchNER\\_3entities\n\n\nLicense\n=======\n\n\ncc-by-4.0"
]
|
b3c7b8186043d8fa33e4af87e81f881c616d99dc | # Dataset Card for "NCBI-Disease_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/NCBI-Disease_test | [
"region:us"
]
| 2023-11-13T19:28:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "sequence": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3867679, "num_examples": 5433}, {"name": "valid", "num_bytes": 669529, "num_examples": 924}, {"name": "test", "num_bytes": 686820, "num_examples": 941}], "download_size": 1489707, "dataset_size": 5224028}} | 2023-11-13T19:28:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "NCBI-Disease_test"
More Information needed | [
"# Dataset Card for \"NCBI-Disease_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"NCBI-Disease_test\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"NCBI-Disease_test\"\n\nMore Information needed"
]
|
9687fc59d6a6cfa2a7da08ff737fcf4336b5e13c | # Dataset Card for "NCBI-Disease_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/NCBI-Disease_train | [
"region:us"
]
| 2023-11-13T19:28:36+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4368435, "num_examples": 5433}, {"name": "valid", "num_bytes": 750884, "num_examples": 924}, {"name": "test", "num_bytes": 768153, "num_examples": 941}], "download_size": 1765072, "dataset_size": 5887472}} | 2023-11-13T19:28:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "NCBI-Disease_train"
More Information needed | [
"# Dataset Card for \"NCBI-Disease_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"NCBI-Disease_train\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"NCBI-Disease_train\"\n\nMore Information needed"
]
|
1ddfabd20e8b20f6234968ae87983e59357fd653 | # Dataset Card for "clinical_ner_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jiminHuang/clinical_ner_train | [
"region:us"
]
| 2023-11-13T19:50:16+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8527346, "num_examples": 10661}, {"name": "valid", "num_bytes": 4951299, "num_examples": 6254}, {"name": "test", "num_bytes": 5307591, "num_examples": 6806}], "download_size": 5455050, "dataset_size": 18786236}} | 2023-11-13T19:50:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "clinical_ner_train"
More Information needed | [
"# Dataset Card for \"clinical_ner_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"clinical_ner_train\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"clinical_ner_train\"\n\nMore Information needed"
]
|
52c97a6b9af86c57a96d828d30c6d2940d20de33 | # Dataset Card for "clinical_ner_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/clinical_ner_train | [
"region:us"
]
| 2023-11-13T19:52:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8527346, "num_examples": 10661}, {"name": "valid", "num_bytes": 4951299, "num_examples": 6254}, {"name": "test", "num_bytes": 5307591, "num_examples": 6806}], "download_size": 5455050, "dataset_size": 18786236}} | 2023-11-13T19:52:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "clinical_ner_train"
More Information needed | [
"# Dataset Card for \"clinical_ner_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"clinical_ner_train\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"clinical_ner_train\"\n\nMore Information needed"
]
|
d683fe91a87c3621a5225635280a2d57812f5295 |
If you want a small subset of this dataset, there is [histogram-comparisons-small-v1](https://huggingface.co/datasets/neoneye/histogram-comparisons-small-v1) with 150k rows.
This dataset contains 3,000,000 items in total. There are 3 curriculums, each containing 1,000,000 items.
Each item is a markdown document.
Each item contains between 2 and 6 image comparisons, with a `Summary` at the bottom.
The images are between 3x3 and 14x14.
The markdown document contains a `## Response` heading that separates the prompt from the answer.
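As a minimal sketch, an item can therefore be split into its prompt and its answer at that heading:

```python
def split_item(markdown_document: str):
    # Everything up to and including "## Response" is the prompt; the rest is the answer
    prompt, separator, answer = markdown_document.partition("## Response")
    return prompt + separator, answer
```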
The structure of the markdown document with 3 comparisons: A, B, C.
```
# Histogram comparisons with summary
## Data A
### Data left
### Data right
## Data B
### Data left
### Data right
## Data C
### Data left
### Data right
## Response
## Compare A
## Compare B
## Compare C
## Summary
``` | neoneye/histogram-comparisons-v1 | [
"task_categories:image-to-text",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"region:us"
]
| 2023-11-13T19:54:56+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text"]} | 2024-01-03T12:29:35+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-to-text #size_categories-1M<n<10M #language-English #license-mit #region-us
|
If you want a small subset of this dataset, there is histogram-comparisons-small-v1 with 150k rows.
This dataset contains 3000000 items in total. There are 3 curriculums each containing 1000000 items.
Each item is a markdown document.
Each item contains between 2 and 6 image comparisons, with a 'Summary' at the bottom.
The images are between 3x3 and 14x14.
The markdown document contains a '## Response', that separates the prompt from the answer.
The structure of the markdown document with 3 comparisons: A, B, C.
| [
"## Response', that separates the prompt from the answer.\n\nThe structure of the markdown document with 3 comparisons: A, B, C."
]
| [
"TAGS\n#task_categories-image-to-text #size_categories-1M<n<10M #language-English #license-mit #region-us \n",
"## Response', that separates the prompt from the answer.\n\nThe structure of the markdown document with 3 comparisons: A, B, C."
]
| [
39,
31
]
| [
"passage: TAGS\n#task_categories-image-to-text #size_categories-1M<n<10M #language-English #license-mit #region-us \n## Response', that separates the prompt from the answer.\n\nThe structure of the markdown document with 3 comparisons: A, B, C."
]
|
ee3bd3c0a7e953203974055dd02a7b45c087cb9a |
# Dataset Card for Evaluation run of jondurbin/airoboros-m-7b-3.1.2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [jondurbin/airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jondurbin__airoboros-m-7b-3.1.2_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-13T19:52:08.394828](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-m-7b-3.1.2_public/blob/main/results_2023-11-13T19-52-08.394828.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6135698931222068,
"acc_stderr": 0.032663709384362964,
"acc_norm": 0.6227233835131805,
"acc_norm_stderr": 0.033389385625867025,
"mc1": 0.3623011015911873,
"mc1_stderr": 0.016826646897262258,
"mc2": 0.5374874453696802,
"mc2_stderr": 0.015091604419760369,
"em": 0.352873322147651,
"em_stderr": 0.004893771826676391,
"f1": 0.41195889261745017,
"f1_stderr": 0.004738382745022343
},
"harness|arc:challenge|25": {
"acc": 0.5921501706484642,
"acc_stderr": 0.014361097288449696,
"acc_norm": 0.6186006825938567,
"acc_norm_stderr": 0.014194389086685256
},
"harness|hellaswag|10": {
"acc": 0.6340370444134634,
"acc_stderr": 0.0048071469251620555,
"acc_norm": 0.8350926110336586,
"acc_norm_stderr": 0.0037033852685121726
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6447368421052632,
"acc_stderr": 0.03894734487013317,
"acc_norm": 0.6447368421052632,
"acc_norm_stderr": 0.03894734487013317
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6641509433962264,
"acc_stderr": 0.02906722014664483,
"acc_norm": 0.6641509433962264,
"acc_norm_stderr": 0.02906722014664483
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.03899073687357335,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.03899073687357335
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6069364161849711,
"acc_stderr": 0.0372424959581773,
"acc_norm": 0.6069364161849711,
"acc_norm_stderr": 0.0372424959581773
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.45098039215686275,
"acc_stderr": 0.04951218252396264,
"acc_norm": 0.45098039215686275,
"acc_norm_stderr": 0.04951218252396264
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5276595744680851,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.5276595744680851,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.02516798233389414,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.02516798233389414
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.04375888492727061,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.04375888492727061
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7225806451612903,
"acc_stderr": 0.025470196835900055,
"acc_norm": 0.7225806451612903,
"acc_norm_stderr": 0.025470196835900055
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7212121212121212,
"acc_stderr": 0.03501438706296781,
"acc_norm": 0.7212121212121212,
"acc_norm_stderr": 0.03501438706296781
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8131313131313131,
"acc_stderr": 0.027772533334218967,
"acc_norm": 0.8131313131313131,
"acc_norm_stderr": 0.027772533334218967
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.025033870583015184,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.025033870583015184
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6256410256410256,
"acc_stderr": 0.024537591572830503,
"acc_norm": 0.6256410256410256,
"acc_norm_stderr": 0.024537591572830503
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.028226446749683522,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.028226446749683522
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.03120469122515002,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.03120469122515002
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8256880733944955,
"acc_stderr": 0.016265675632010347,
"acc_norm": 0.8256880733944955,
"acc_norm_stderr": 0.016265675632010347
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.028125972265654373,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.028125972265654373
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7510548523206751,
"acc_stderr": 0.028146970599422644,
"acc_norm": 0.7510548523206751,
"acc_norm_stderr": 0.028146970599422644
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.672645739910314,
"acc_stderr": 0.031493846709941306,
"acc_norm": 0.672645739910314,
"acc_norm_stderr": 0.031493846709941306
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306085,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306085
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302872,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302872
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4107142857142857,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.4107142857142857,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597528,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597528
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8109833971902938,
"acc_stderr": 0.014000791294407006,
"acc_norm": 0.8109833971902938,
"acc_norm_stderr": 0.014000791294407006
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.024027745155265012,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.024027745155265012
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217892,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217892
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.025829163272757482,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.025829163272757482
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6720257234726688,
"acc_stderr": 0.026664410886937617,
"acc_norm": 0.6720257234726688,
"acc_norm_stderr": 0.026664410886937617
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7253086419753086,
"acc_stderr": 0.024836057868294674,
"acc_norm": 0.7253086419753086,
"acc_norm_stderr": 0.024836057868294674
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4397163120567376,
"acc_stderr": 0.029609912075594106,
"acc_norm": 0.4397163120567376,
"acc_norm_stderr": 0.029609912075594106
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4445893089960887,
"acc_stderr": 0.012691575792657117,
"acc_norm": 0.4445893089960887,
"acc_norm_stderr": 0.012691575792657117
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.01933314202079716,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.01933314202079716
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6816326530612244,
"acc_stderr": 0.029822533793982066,
"acc_norm": 0.6816326530612244,
"acc_norm_stderr": 0.029822533793982066
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7960199004975125,
"acc_stderr": 0.02849317624532607,
"acc_norm": 0.7960199004975125,
"acc_norm_stderr": 0.02849317624532607
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.032659863237109066,
"acc_norm": 0.88,
"acc_norm_stderr": 0.032659863237109066
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3623011015911873,
"mc1_stderr": 0.016826646897262258,
"mc2": 0.5374874453696802,
"mc2_stderr": 0.015091604419760369
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774108
},
"harness|drop|3": {
"em": 0.352873322147651,
"em_stderr": 0.004893771826676391,
"f1": 0.41195889261745017,
"f1_stderr": 0.004738382745022343
},
"harness|gsm8k|5": {
"acc": 0.13874147081122062,
"acc_stderr": 0.009521649920798146
}
}
```
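For a quick overview, the per-task scores in the JSON above can be flattened into a small table; the snippet below is only a sketch that assumes the JSON has been read into a string named `results_json`.

```python
import json

# Sketch: flatten the results JSON shown above into (task, acc, acc_norm) rows.
# `results_json` is assumed to contain the JSON string printed above.
results = json.loads(results_json)
rows = [
    (task, metrics.get("acc"), metrics.get("acc_norm"))
    for task, metrics in results.items()
    if task != "all"
]
for task, acc, acc_norm in sorted(rows, key=lambda r: r[1] or 0.0, reverse=True):
    print(f"{task:60s} acc={acc} acc_norm={acc_norm}")
```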
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_jondurbin__airoboros-m-7b-3.1.2 | [
"region:us"
]
| 2023-11-13T19:55:07+00:00 | {"pretty_name": "Evaluation run of jondurbin/airoboros-m-7b-3.1.2", "dataset_summary": "Dataset automatically created during the evaluation run of model [jondurbin/airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__airoboros-m-7b-3.1.2_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-13T19:52:08.394828](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-m-7b-3.1.2_public/blob/main/results_2023-11-13T19-52-08.394828.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6135698931222068,\n \"acc_stderr\": 0.032663709384362964,\n \"acc_norm\": 0.6227233835131805,\n \"acc_norm_stderr\": 0.033389385625867025,\n \"mc1\": 0.3623011015911873,\n \"mc1_stderr\": 0.016826646897262258,\n \"mc2\": 0.5374874453696802,\n \"mc2_stderr\": 0.015091604419760369,\n \"em\": 0.352873322147651,\n \"em_stderr\": 0.004893771826676391,\n \"f1\": 0.41195889261745017,\n \"f1_stderr\": 0.004738382745022343\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5921501706484642,\n \"acc_stderr\": 0.014361097288449696,\n \"acc_norm\": 0.6186006825938567,\n \"acc_norm_stderr\": 0.014194389086685256\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6340370444134634,\n \"acc_stderr\": 0.0048071469251620555,\n \"acc_norm\": 0.8350926110336586,\n \"acc_norm_stderr\": 0.0037033852685121726\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6447368421052632,\n \"acc_stderr\": 0.03894734487013317,\n \"acc_norm\": 0.6447368421052632,\n \"acc_norm_stderr\": 0.03894734487013317\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6641509433962264,\n \"acc_stderr\": 0.02906722014664483,\n \"acc_norm\": 0.6641509433962264,\n \"acc_norm_stderr\": 0.02906722014664483\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.03899073687357335,\n 
\"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03899073687357335\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.0372424959581773,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.0372424959581773\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.04951218252396264,\n \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.04951218252396264\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3941798941798942,\n \"acc_stderr\": 0.02516798233389414,\n \"acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.02516798233389414\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.04375888492727061,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.04375888492727061\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7225806451612903,\n \"acc_stderr\": 0.025470196835900055,\n \"acc_norm\": 0.7225806451612903,\n \"acc_norm_stderr\": 0.025470196835900055\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7212121212121212,\n \"acc_stderr\": 0.03501438706296781,\n \"acc_norm\": 0.7212121212121212,\n \"acc_norm_stderr\": 0.03501438706296781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8131313131313131,\n \"acc_stderr\": 0.027772533334218967,\n \"acc_norm\": 0.8131313131313131,\n \"acc_norm_stderr\": 0.027772533334218967\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8601036269430051,\n 
\"acc_stderr\": 0.025033870583015184,\n \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.025033870583015184\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6256410256410256,\n \"acc_stderr\": 0.024537591572830503,\n \"acc_norm\": 0.6256410256410256,\n \"acc_norm_stderr\": 0.024537591572830503\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3111111111111111,\n \"acc_stderr\": 0.028226446749683522,\n \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.028226446749683522\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.03120469122515002,\n \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.03120469122515002\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8256880733944955,\n \"acc_stderr\": 0.016265675632010347,\n \"acc_norm\": 0.8256880733944955,\n \"acc_norm_stderr\": 0.016265675632010347\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538272,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538272\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7990196078431373,\n \"acc_stderr\": 0.028125972265654373,\n \"acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.028125972265654373\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.672645739910314,\n \"acc_stderr\": 0.031493846709941306,\n \"acc_norm\": 0.672645739910314,\n \"acc_norm_stderr\": 0.031493846709941306\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306085,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306085\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.743801652892562,\n \"acc_stderr\": 0.03984979653302872,\n \"acc_norm\": 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302872\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.4107142857142857,\n \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597528,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597528\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n 
\"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8109833971902938,\n \"acc_stderr\": 0.014000791294407006,\n \"acc_norm\": 0.8109833971902938,\n \"acc_norm_stderr\": 0.014000791294407006\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.024027745155265012,\n \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.024027745155265012\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.014333522059217892,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.014333522059217892\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757482,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757482\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6720257234726688,\n \"acc_stderr\": 0.026664410886937617,\n \"acc_norm\": 0.6720257234726688,\n \"acc_norm_stderr\": 0.026664410886937617\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7253086419753086,\n \"acc_stderr\": 0.024836057868294674,\n \"acc_norm\": 0.7253086419753086,\n \"acc_norm_stderr\": 0.024836057868294674\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4397163120567376,\n \"acc_stderr\": 0.029609912075594106,\n \"acc_norm\": 0.4397163120567376,\n \"acc_norm_stderr\": 0.029609912075594106\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4445893089960887,\n \"acc_stderr\": 0.012691575792657117,\n \"acc_norm\": 0.4445893089960887,\n \"acc_norm_stderr\": 0.012691575792657117\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.01933314202079716,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.01933314202079716\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6816326530612244,\n \"acc_stderr\": 0.029822533793982066,\n \"acc_norm\": 0.6816326530612244,\n \"acc_norm_stderr\": 0.029822533793982066\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7960199004975125,\n \"acc_stderr\": 0.02849317624532607,\n \"acc_norm\": 0.7960199004975125,\n \"acc_norm_stderr\": 0.02849317624532607\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3623011015911873,\n \"mc1_stderr\": 0.016826646897262258,\n \"mc2\": 0.5374874453696802,\n \"mc2_stderr\": 0.015091604419760369\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7758484609313339,\n 
\"acc_stderr\": 0.011720400740774108\n },\n \"harness|drop|3\": {\n \"em\": 0.352873322147651,\n \"em_stderr\": 0.004893771826676391,\n \"f1\": 0.41195889261745017,\n \"f1_stderr\": 0.004738382745022343\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.13874147081122062,\n \"acc_stderr\": 0.009521649920798146\n }\n}\n```", "repo_url": "https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|arc:challenge|25_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|drop|3_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|gsm8k|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hellaswag|10_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T19-52-08.394828.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T19-52-08.394828.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-13T19-52-08.394828.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-13T19-52-08.394828.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_biology|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-13T19-52-08.394828.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["**/details_harness|winogrande|5_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-13T19-52-08.394828.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_13T19_52_08.394828", "path": ["results_2023-11-13T19-52-08.394828.parquet"]}, {"split": "latest", "path": ["results_2023-11-13T19-52-08.394828.parquet"]}]}]} | 2023-11-13T19:55:52+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of jondurbin/airoboros-m-7b-3.1.2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model jondurbin/airoboros-m-7b-3.1.2 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
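A minimal sketch of such a call is shown below. The repository id is inferred from the leaderboard's usual `details_<org>__<model>` naming, and the `harness_winogrande_5` configuration comes from this card's metadata, so treat both names as assumptions to verify on the Hub.

```python
from datasets import load_dataset

# Assumed repository id (leaderboard naming convention); check the Hub for the
# exact name, including a possible "_public" suffix.
data = load_dataset(
    "open-llm-leaderboard/details_jondurbin__airoboros-m-7b-3.1.2",
    "harness_winogrande_5",
    split="latest",  # the "latest" split always points at the most recent run
)
print(data)
```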
## Latest results
These are the latest results from run 2023-11-13T19:52:08.394828 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of jondurbin/airoboros-m-7b-3.1.2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-m-7b-3.1.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T19:52:08.394828(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of jondurbin/airoboros-m-7b-3.1.2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-m-7b-3.1.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-13T19:52:08.394828(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
23,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jondurbin/airoboros-m-7b-3.1.2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-m-7b-3.1.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-13T19:52:08.394828(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
392f3216f06067929f44f89ad95a10af06861c33 | # Dataset Card for "AGENT_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | GokhanAI/Synthetic | [
"region:us"
]
| 2023-11-13T20:01:25+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15850263.160279274, "num_examples": 24354}, {"name": "test", "num_bytes": 1301655.8397207255, "num_examples": 2000}], "download_size": 5685426, "dataset_size": 17151919.0}} | 2023-11-13T20:12:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "AGENT_V3"
More Information needed | [
"# Dataset Card for \"AGENT_V3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"AGENT_V3\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"AGENT_V3\"\n\nMore Information needed"
]
|
21cd2df4c2f3e68ebb8f9da2902f30834ced630f | # Dataset Card for "ds-11-13-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Chunt0/ds-11-13-v1 | [
"region:us"
]
| 2023-11-13T20:15:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6693034.0, "num_examples": 29}], "download_size": 6686442, "dataset_size": 6693034.0}} | 2023-11-13T20:15:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ds-11-13-v1"
More Information needed | [
"# Dataset Card for \"ds-11-13-v1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ds-11-13-v1\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ds-11-13-v1\"\n\nMore Information needed"
]
|
4ba4baceacf9fe17fd023fe879d782fc7d0d8dc9 | # Dataset Card for "80da83b2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/80da83b2 | [
"region:us"
]
| 2023-11-13T20:26:14+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 219, "num_examples": 10}], "download_size": 1431, "dataset_size": 219}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T20:26:14+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "80da83b2"
More Information needed | [
"# Dataset Card for \"80da83b2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"80da83b2\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"80da83b2\"\n\nMore Information needed"
]
|
eea48db19d32eecaacd55b45a736ce2519992867 | # Dataset Card for "bd8b4914"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/bd8b4914 | [
"region:us"
]
| 2023-11-13T20:28:26+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 249, "num_examples": 10}], "download_size": 1433, "dataset_size": 249}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T20:28:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bd8b4914"
More Information needed | [
"# Dataset Card for \"bd8b4914\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bd8b4914\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bd8b4914\"\n\nMore Information needed"
]
|
ba6f133aafa044a0ee31a7e5b0bdcdaf2f27b328 | # Dataset Card for "60b7339d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/60b7339d | [
"region:us"
]
| 2023-11-13T20:35:21+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 231, "num_examples": 10}], "download_size": 1418, "dataset_size": 231}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T20:35:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "60b7339d"
More Information needed | [
"# Dataset Card for \"60b7339d\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"60b7339d\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"60b7339d\"\n\nMore Information needed"
]
|
b607333b93361ae575e91d2d63c33a2d795ad733 | # Dataset Card for "7fb2a617"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/7fb2a617 | [
"region:us"
]
| 2023-11-13T20:35:24+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 231, "num_examples": 10}], "download_size": 1418, "dataset_size": 231}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T20:35:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "7fb2a617"
More Information needed | [
"# Dataset Card for \"7fb2a617\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"7fb2a617\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"7fb2a617\"\n\nMore Information needed"
]
|
524d2b8cfb1967364d953fc8a6bd91ab3213de60 | # Dataset Card for "muppetshow-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Norod78/muppetshow-blip-captions | [
"region:us"
]
| 2023-11-13T20:39:01+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 495380043.0, "num_examples": 402}], "download_size": 495385822, "dataset_size": 495380043.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T20:39:50+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "muppetshow-blip-captions"
More Information needed | [
"# Dataset Card for \"muppetshow-blip-captions\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"muppetshow-blip-captions\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"muppetshow-blip-captions\"\n\nMore Information needed"
]
|
a265fa0cce514063a284c126a4bc850507ba0691 | # Dataset Card for coachingllm
<!-- Provide a quick summary of the dataset. -->
Collection of Coaching questions.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Fabian Celik
- **Language(s) (NLP):** en
- **License:** apache-2.0 | fabiancelik/coachingllm | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"coaching",
"questions",
"region:us"
]
| 2023-11-13T20:39:33+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["question-answering", "text-generation", "conversational"], "tags": ["coaching", "questions"]} | 2023-11-19T17:38:14+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-n<1K #language-English #license-apache-2.0 #coaching #questions #region-us
| # Dataset Card for coachingllm
Collection of Coaching questions.
## Dataset Details
### Dataset Description
- Curated by: Fabian Celik
- Language(s) (NLP): en
- License: apache-2.0 | [
"# Dataset Card for coachingllm\n\n\n\nCollection of Coaching questions.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Curated by: Fabian Celik\n- Language(s) (NLP): en\n- License: apache-2.0"
]
| [
"TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-n<1K #language-English #license-apache-2.0 #coaching #questions #region-us \n",
"# Dataset Card for coachingllm\n\n\n\nCollection of Coaching questions.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Curated by: Fabian Celik\n- Language(s) (NLP): en\n- License: apache-2.0"
]
| [
67,
13,
4,
31
]
| [
"passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-n<1K #language-English #license-apache-2.0 #coaching #questions #region-us \n# Dataset Card for coachingllm\n\n\n\nCollection of Coaching questions.## Dataset Details### Dataset Description\n\n\n\n- Curated by: Fabian Celik\n- Language(s) (NLP): en\n- License: apache-2.0"
]
|
797b732cfcd9754cea71e5c83a50e7dbe5971180 | # Dataset Card for "train_thumbnails2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Aspik101/train_thumbnails2 | [
"region:us"
]
| 2023-11-13T20:51:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3646466821.0, "num_examples": 513}], "download_size": 3646525869, "dataset_size": 3646466821.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T21:18:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "train_thumbnails2"
More Information needed | [
"# Dataset Card for \"train_thumbnails2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"train_thumbnails2\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"train_thumbnails2\"\n\nMore Information needed"
]
|
5cbe6693c34b89d7d4cd61c76bd17e9fb855d884 | # Dataset Card for "fashion-decade-images-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tonyassi/fashion-decade-images-1 | [
"region:us"
]
| 2023-11-13T20:51:30+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1910s", "1": "1920s", "2": "1930s", "3": "1940s", "4": "1950s", "5": "1960s", "6": "1970s", "7": "1980s", "8": "1990s", "9": "2000s"}}}}], "splits": [{"name": "train", "num_bytes": 726974830.376, "num_examples": 2504}], "download_size": 726313441, "dataset_size": 726974830.376}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T21:08:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fashion-decade-images-1"
More Information needed | [
"# Dataset Card for \"fashion-decade-images-1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fashion-decade-images-1\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fashion-decade-images-1\"\n\nMore Information needed"
]
|
4b1cc7c94fe65367ef24fb7744d71582403f1556 | # Dataset Card for "alt_fantasy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mickume/alt_fantasy | [
"region:us"
]
| 2023-11-13T21:45:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 653833080, "num_examples": 3488817}], "download_size": 402691207, "dataset_size": 653833080}} | 2023-11-13T22:10:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "alt_fantasy"
More Information needed | [
"# Dataset Card for \"alt_fantasy\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"alt_fantasy\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"alt_fantasy\"\n\nMore Information needed"
]
|
2f0ef86594a44c4e06290d8e6aacb89f4897e480 | Condensed version of the meta-math dataset
arxiv.org/abs/2309.12284
View the project page:
https://meta-math.github.io/
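As a quick start, the condensed data can be loaded straight from the Hub. This is a minimal sketch that assumes the default configuration exposes a `train` split; the card does not document the split names or column layout, so check the dataset viewer first.

```python
from datasets import load_dataset

# Assumption: default config with a "train" split.
minimath = load_dataset("cxllin/minimath", split="train")
print(minimath[0])
```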
# Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
``` | cxllin/minimath | [
"license:mit",
"math",
"math-qa",
"arxiv:2309.12284",
"region:us"
]
| 2023-11-13T21:47:54+00:00 | {"license": "mit", "tags": ["math", "math-qa"]} | 2023-11-13T21:51:31+00:00 | [
"2309.12284"
]
| []
| TAGS
#license-mit #math #math-qa #arxiv-2309.12284 #region-us
| Condensed version of the meta-math dataset
URL
View the project page:
URL
| []
| [
"TAGS\n#license-mit #math #math-qa #arxiv-2309.12284 #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#license-mit #math #math-qa #arxiv-2309.12284 #region-us \n"
]
|
3c6b232f7fe8f8325143eb19554111562423d768 |
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png" alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Dataset Card for Tulu V2 Mix
*Note the [ODC-BY license](https://opendatacommons.org/licenses/by/1-0/), indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
Tulu is a series of language models that are trained to act as helpful assistants.
The dataset consists of a mix of:
* [FLAN](https://github.com/google-research/FLAN/tree/main) (Apache 2.0): We use 50,000 examples sampled from FLAN v2. To emphasize CoT-style reasoning, we sample another 50,000 examples from the CoT
subset of the FLAN v2 mixture.
* [Open Assistant 1](https://huggingface.co/datasets/OpenAssistant/oasst1) (Apache 2.0): We isolate the highest-scoring paths in each conversation tree and use these samples, resulting in 7,708 examples.
* [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) (Apache 2.0 listed, no official repo found): We use all 114,046 examples from our processed ShareGPT dataset, as we found ShareGPT gave strong performance in prior work.
* [GPT4-Alpaca](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM#data-release) (CC BY NC 4.0): We sample 20,000 samples from GPT-4 Alpaca to further include distilled GPT-4 data.
* [Code-Alpaca](https://github.com/sahil280114/codealpaca) (CC BY NC 4.0): We use all 20,022 examples from Code Alpaca, following our prior V1 mixture, in order to improve model code abilities.
* [LIMA](https://huggingface.co/datasets/GAIR/lima) (CC BY-NC-SA): We use 1,030 examples from LIMA as an example of carefully curated data.
* [WizardLM Evol Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) (No license provided): We subsample 30,000 examples from WizardLM, which contains distilled data of increasing diversity and complexity.
* [Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca) (MIT): We sample 30,000 samples generated by GPT-4 from OpenOrca, a reproduction of Orca (Mukherjee et al., 2023), which augments FLAN data with additional model-generated explanations.
* Hardcoded: A collection of prompts such as "Tell me about yourself" such that the model generates correct outputs given inquiries about its name or developers. We wrote 14 samples and repeat each sample 10 times in the mixture, resulting in 140 total samples.
* Science: 7,544 examples from a mixture of scientific document understanding tasks—including question answering, fact-checking, summarization, and information extraction (under development, standalone release soon).
These are made by taking either just the training set of the subsets or the entire section if no splits are present.
Tulu V2 is presented as a singular training split.
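Since the mixture ships as that single train split, it can be loaded directly with the `datasets` library. The sketch below follows the feature schema given in this card's metadata (`dataset`, `id`, and a `messages` list of role/content turns).

```python
from datasets import load_dataset

# ~326k supervised fine-tuning examples, each a list of chat messages.
tulu = load_dataset("allenai/tulu-v2-sft-mixture", split="train")

example = tulu[0]
print(example["dataset"], example["id"])
for message in example["messages"]:
    print(f'{message["role"]}: {message["content"][:80]}')
```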
[Tulu V2 DPO 70B](https://huggingface.co/allenai/tulu-2-dpo-70b) is a fine-tuned version of Llama 2 that was trained on a mix of publicly available, synthetic and human datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
**Model Family:** Other models and the dataset are found in the [Tulu V2 collection](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).
The length distribution of the dataset can be seen below:
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/length_histogram_v2.png" alt="TuluV2 histogram" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Tulu V1 Mix can be found [here](https://huggingface.co/datasets/allenai/tulu-v1).
### Science data note
The included science data is from the following categories:
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/science_data.png" alt="TuluV2 science data mix" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Note that some of the examples include an off-by-one error in the sentence indexing that had a small or negligible impact on performance.
This was found during testing and will be updated in future versions, with the detailed release of the dataset artifact itself coming in a future release.
### License
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.
| allenai/tulu-v2-sft-mixture | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:odc-by",
"arxiv:2305.18290",
"region:us"
]
| 2023-11-13T21:56:34+00:00 | {"language": ["en"], "license": "odc-by", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering", "conversational", "text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "dataset", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "messages", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1239293363, "num_examples": 326154}], "download_size": 554561769, "dataset_size": 1239293363}} | 2023-11-29T06:53:43+00:00 | [
"2305.18290"
]
| [
"en"
]
| TAGS
#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-odc-by #arxiv-2305.18290 #region-us
|
<img src="URL alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Dataset Card for Tulu V2 Mix
*Note the ODC-BY license, indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
Tulu is a series of language models that are trained to act as helpful assistants.
The dataset consists of a mix of:
* FLAN (Apache 2.0): We use 50,000 examples sampled from FLAN v2. To emphasize CoT-style reasoning, we sample another 50,000 examples from the CoT
subset of the FLAN v2 mixture.
* Open Assistant 1 (Apache 2.0): We isolate the highest-scoring paths in each conversation tree and use these samples, resulting in 7,708 examples.
* ShareGPT (Apache 2.0 listed, no official repo found): We use all 114,046 examples from our processed ShareGPT dataset, as we found ShareGPT gave strong performance in prior work.
* GPT4-Alpaca (CC BY NC 4.0): We sample 20,000 samples from GPT-4 Alpaca to further include distilled GPT-4 data.
* Code-Alpaca (CC BY NC 4.0): We use all 20,022 examples from Code Alpaca, following our prior V1 mixture, in order to improve model code abilities.
* LIMA (CC BY-NC-SA): We use 1,030 examples from LIMA as an example of carefully curated data.
* WizardLM Evol Instruct (No license provided): We subsample 30,000 examples from WizardLM, which contains distilled data of increasing diversity and complexity.
* Open-Orca (MIT): We sample 30,000 samples generated by GPT-4 from OpenOrca, a reproduction of Orca (Mukherjee et al., 2023), which augments FLAN data with additional model-generated explanations.
* Hardcoded: A collection of prompts such as 'Tell me about yourself' such that the model generates correct outputs given inquiries about its name or developers. We wrote 14 samples and repeat each sample 10 times in the mixture, resulting in 140 total samples.
* Science: 7,544 examples from a mixture of scientific document understanding tasks—including question answering, fact-checking, summarization, and information extraction (under development, standalone release soon).
These are made by taking either just the training set of the subsets or the entire section if no splits are present.
Tulu V2 is presented as a singular training split.
Tulu V2 DPO 70B is a fine-tuned version of Llama 2 that was trained on a mix of publicly available, synthetic and human datasets using Direct Preference Optimization (DPO). 
Model Family: Other models and the dataset are found in the Tulu V2 collection.
The length distribution of the dataset can be seen below:
<img src="URL alt="TuluV2 histogram" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Tulu V1 Mix can be found here.
### Science data note
The included science data is from the following categories:
<img src="URL alt="TuluV2 science data mix" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Note that some of the examples include an off-by-one error in the sentence indexing that had a small or negligible impact on performance.
This was found during testing and will be updated in future versions, with the detailed release of the dataset artifact itself coming in a future release.
### License
We are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
| [
"# Dataset Card for Tulu V2 Mix\n\n*Note the ODC-BY license, indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.*\n\nTulu is a series of language models that are trained to act as helpful assistants. \nThe dataset consists of a mix of :\n* FLAN (Apache 2.0): We use 50,000 examples sampled from FLAN v2. To emphasize CoT-style reasoning, we sample another 50,000 examples from the CoT\nsubset of the FLAN v2 mixture.\n* Open Assistant 1 (Apache 2.0): We isolate the highest-scoring paths in each conversation tree and use these samples, resulting in 7,708 examples.\n* ShareGPT (Apache 2.0 listed, no official repo found): We use all 114,046 from our processed ShareGPT dataset, as we found ShareGPT gave strong performance in prior work.\n* GPT4-Alpaca (CC By NC 4.0):We sample 20,000 samples from GPT-4 Alpaca to further include distilled GPT-4 data.\n* Code-Alpaca (CC By NC 4.0):We use all 20,022 examples from Code Alpaca, following our prior V1 mixture, in order to improve model code abilities.\n* LIMA (CC BY-NC-SA): We use 1,030 examples from LIMA as an example of carefully curated data.\n* WizardLM Evol Instruct (No license provided): We subsample 30,000 examples from WizardLM, which contains distilled data of increasing diversity and complexity.\n* Open-Orca (MIT): We sample 30,000 samples generated by GPT-4 from OpenOrca, a reproduction of Orca Mukherjee et al., 2023, which augments FLAN data with additional model-generated explanations\n* Hardcoded: A collection of prompts such as 'Tell me about yourself' such that the model generates correct outputs given inquiries about its name or developers. We wrote 14 samples and repeat each sample 10 times in the mixture, resulting in 140 total samples.\n* Science: 7,544 examples from a mixture of scientific document understand tasks—including question answering, fact-checking, summarization, and information extraction (under development, standalone release soon).\n\nThese are made by taking either just the training set of the subsets or the entire section if no splits are present.\nTulu V2 is presented as a singular training split. \nTulu V2 DPO 70B, and is a fine-tuned version of Llama 2 that was trained on on a mix of publicly available, synthetic and human datasets using Direct Preference Optimization (DPO). \n\nModel Family: Other models and the dataset are found in the Tulu V2 collection.\n\nThe length distribution of the dataset can be seen below:\n<img src=\"URL alt=\"TuluV2 histogram\" width=\"600\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nTulu V1 Mix can be found here.",
"### Science data note\nThe included science data is from the following categories: \n\n\n<img src=\"URL alt=\"TuluV2 science data mix\" width=\"600\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nNote that some of the examples include an off-by-one error in the sentence indexing that had a small or negligible impact on performance. \nThis was found during testing and will be updated in future versions, with the detailed release of the dataset artifact itself coming in a future release.",
"### License\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset."
]
| [
"TAGS\n#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-odc-by #arxiv-2305.18290 #region-us \n",
"# Dataset Card for Tulu V2 Mix\n\n*Note the ODC-BY license, indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.*\n\nTulu is a series of language models that are trained to act as helpful assistants. \nThe dataset consists of a mix of :\n* FLAN (Apache 2.0): We use 50,000 examples sampled from FLAN v2. To emphasize CoT-style reasoning, we sample another 50,000 examples from the CoT\nsubset of the FLAN v2 mixture.\n* Open Assistant 1 (Apache 2.0): We isolate the highest-scoring paths in each conversation tree and use these samples, resulting in 7,708 examples.\n* ShareGPT (Apache 2.0 listed, no official repo found): We use all 114,046 from our processed ShareGPT dataset, as we found ShareGPT gave strong performance in prior work.\n* GPT4-Alpaca (CC By NC 4.0):We sample 20,000 samples from GPT-4 Alpaca to further include distilled GPT-4 data.\n* Code-Alpaca (CC By NC 4.0):We use all 20,022 examples from Code Alpaca, following our prior V1 mixture, in order to improve model code abilities.\n* LIMA (CC BY-NC-SA): We use 1,030 examples from LIMA as an example of carefully curated data.\n* WizardLM Evol Instruct (No license provided): We subsample 30,000 examples from WizardLM, which contains distilled data of increasing diversity and complexity.\n* Open-Orca (MIT): We sample 30,000 samples generated by GPT-4 from OpenOrca, a reproduction of Orca Mukherjee et al., 2023, which augments FLAN data with additional model-generated explanations\n* Hardcoded: A collection of prompts such as 'Tell me about yourself' such that the model generates correct outputs given inquiries about its name or developers. We wrote 14 samples and repeat each sample 10 times in the mixture, resulting in 140 total samples.\n* Science: 7,544 examples from a mixture of scientific document understand tasks—including question answering, fact-checking, summarization, and information extraction (under development, standalone release soon).\n\nThese are made by taking either just the training set of the subsets or the entire section if no splits are present.\nTulu V2 is presented as a singular training split. \nTulu V2 DPO 70B, and is a fine-tuned version of Llama 2 that was trained on on a mix of publicly available, synthetic and human datasets using Direct Preference Optimization (DPO). \n\nModel Family: Other models and the dataset are found in the Tulu V2 collection.\n\nThe length distribution of the dataset can be seen below:\n<img src=\"URL alt=\"TuluV2 histogram\" width=\"600\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nTulu V1 Mix can be found here.",
"### Science data note\nThe included science data is from the following categories: \n\n\n<img src=\"URL alt=\"TuluV2 science data mix\" width=\"600\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nNote that some of the examples include an off-by-one error in the sentence indexing that had a small or negligible impact on performance. \nThis was found during testing and will be updated in future versions, with the detailed release of the dataset artifact itself coming in a future release.",
"### License\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset."
]
| [
72,
714,
126,
49
]
| [
"passage: TAGS\n#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-odc-by #arxiv-2305.18290 #region-us \n"
]
|
d18d3b0a4104ad09119c4c526f42570deb050697 |
# Testing Language Models on a Held-Out High School National Finals Exam
When xAI recently released [Grok-1](https://x.ai/), they evaluated it on the 2023 Hungarian national high school finals in mathematics, which was published after the training data cutoff for all the models in their evaluation. While MATH and GSM8k are the standard benchmarks for evaluating the mathematical abilities of large language models, **there are risks that modern models overfit to these datasets**, either from training directly on the test sets or from tuning the model hyperparameters to maximize test set performance. By evaluating on a truly held-out test set, we can better gauge the mathematical performance of these models.
We evaluate on the [2023 Hungarian national high school finals in mathematics](https://dload-oktatas.educatio.hu/erettsegi/feladatok_2023tavasz_kozep/k_matang_23maj_fl.pdf) and grade by hand using [the provided rubric](https://dload-oktatas.educatio.hu/erettsegi/feladatok_2023tavasz_kozep/k_matang_23maj_ut.pdf). All model solutions were graded by myself over the course of one day. Model solutions were sampled using temperature 0.1.
For base models such as Code Llama, Llemma, and Mistral-7B, a 5-shot prompt was used. For instruction tuned models, we used the default prompt template for that model.
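For illustration only (this is not the repository's `run_exam.py`), a minimal sketch of sampling a base model at temperature 0.1 on a few-shot prompt with Hugging Face Transformers might look like the following; the model id and prompt here are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/llemma_7b"  # placeholder: any base model from the results table
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A 5-shot prompt: five worked exam problems followed by the question to solve.
few_shot_prompt = "..."  # placeholder
inputs = tokenizer(few_shot_prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.1,      # sampling temperature used in this evaluation
    max_new_tokens=512,
)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```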
## Results
**Note**: In an earlier version of the LaTeX transcription of the exam, question 14a had incorrect formatting and question 14b did not contain all the required information to solve the problem. These issues have been fixed and the numbers are updated.
| Model | Exam Score | GSM8k | MATH |
| ------------------------------------------------------------------------------ | ---------- | ------ | ------ |
| [Code Llama 7B](https://huggingface.co/codellama/CodeLlama-7b-hf) (few-shot) | 8\% | 10.5% | 4.5% |
| [MetaMath 7B](https://huggingface.co/meta-math/MetaMath-7B-V1.0) | 20\% | 66.5\% | 19.8\% |
| [MAmmoTH 7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 17\% | 50.5\% | 10.4\% |
| [MAmmoTH Coder 7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B) | 11\% | 22.5\% | 7.9\% |
| [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) (few-shot) | 23\% | 36.4\% | 18\% |
| - | - | - | - |
| [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) (few-shot) | 22\% | 39.2\% | - |
| [MetaMath Mistral 7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) | 29\% | 77.7\% | 28.2\% |
| [OpenChat 3.5](https://huggingface.co/openchat/openchat_3.5) | 37\% | 77.3\% | 28.6\% |
| - | - | - | - |
| [Code Llama 34B](https://huggingface.co/codellama/CodeLlama-34b-hf) (few-shot) | 15\% | 29.6\% | 12.2\% |
| [MAmmoTH Coder 34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B) | 17\% | 34.3\% | 11.6\% |
| [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) (few-shot) | 43\% | 51.5\% | 25.0\% |
| - | - | - | - |
| [Qwen 7B](https://huggingface.co/eleutherai/qwen-7b) | 22\% | 51.7\% | 11.6\% |
| - | - | - | - |
| [Cohere Command](https://cohere.com/) | 18\% | - | - |
| [GPT-3.5 Turbo](https://openai.com/) | 41\% | 57.1\% | 23.5\% |
| [GPT-4](https://openai.com/) | 68\% | 92.0\% | 42.5\% |
| [Claude 2](https://www.anthropic.com/) | 55\% | 88.0\% | - |
| [Grok-0 (33B)](https://x.ai/) | 37\% | 56.8\% | 15.7\% |
| [Grok-1](https://x.ai/) | 59\% | 62.9\% | 23.9\% |
## Observations
1. Plotting GSM8k performance versus performance on the exam, we can see clear evidence that several models overfit to the benchmark.

2. Despite [claiming](https://huggingface.co/openchat/openchat_3.5#comparison-with-xai-grok-models) that OpenChat 3.5 is competitive with Grok-1, it only gets around half the score on the held-out exam, indicating that it simply overfits to evaluations.
3. Llemma 34B is competitive with GPT-3.5 Turbo on the held-out exam. Further instruction tuning Llemma 34B should give even greater performance.
## Solutions
Please find model solutions and corresponding grades in the `solutions` folder.
## Running the Evaluation
To run the evaluation, run the following command:
```bash
python run_exam.py --model EleutherAI/llemma_34b --exam test/exam.csv --prompt few_shot
```
## Notes on Grading
There are a few problems which either require creating or reading a figure. For these problems, I graded the model solutions as incorrect. In the future when models have these abilities, this should be changed.
## Citation
To cite this article, use the following citation:
```bibtex
@misc{testing_language_models_on_a_held_out_high_school_national_finals_exam,
title={Testing Language Models on a Held-Out High School National Finals Exam},
author={Keiran Paster},
howpublished={\url{https://huggingface.co/datasets/keirp/hungarian_national_hs_finals_exam}},
journal = {HuggingFace repository},
year={2023},
}
```
| keirp/hungarian_national_hs_finals_exam | [
"region:us"
]
| 2023-11-13T22:48:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "test.csv"}]}], "dataset_info": {"features": [{"name": "Question", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 9300, "num_examples": 33}], "download_size": 6283, "dataset_size": 9300}} | 2023-12-04T18:49:36+00:00 | []
| []
| TAGS
#region-us
| Testing Language Models on a Held-Out High School National Finals Exam
======================================================================
When xAI recently released Grok-1, they evaluated it on the 2023 Hungarian national high school finals in mathematics, which was published after the training data cutoff for all the models in their evaluation. While MATH and GSM8k are the standard benchmarks for evaluating the mathematical abilities of large language models, there are risks that modern models overfit to these datasets, either from training directly on the test sets or from tuning the model hyperparameters to maximize test set performance. By evaluating on a truly held-out test set, we can better gauge the mathematical performance of these models.
We evaluate on the 2023 Hungarian national high school finals in mathematics and grade by hand using the provided rubric. All model solutions were graded by myself over the course of one day. Model solutions were sampled using temperature 0.1.
For base models such as Code Llama, Llemma, and Mistral-7B, a 5-shot prompt was used. For instruction tuned models, we used the default prompt template for that model.
Results
-------
Note: In an earlier version of the LaTeX transcription of the exam, question 14a had incorrect formatting and question 14b did not contain all the required information to solve the problem. These issues have been fixed and the numbers are updated.
Observations
------------
1. Plotting GSM8k performance versus performance on the exam, we can see clear evidence that several models overfit to the benchmark.
!GSM8k vs Exam
2. Despite claiming that OpenChat 3.5 is competitive with Grok-1, it only gets around half the score on the held-out exam, indicating that it simply overfits to evaluations.
3. Llemma 34B is competitive with GPT-3.5 Turbo on the held-out exam. Further instruction tuning Llemma 34B should give even greater performance.
Solutions
---------
Please find model solutions and corresponding grades in the 'solutions' folder.
Running the Evaluation
----------------------
To run the evaluation, run the following command:
Notes on Grading
----------------
There are a few problems which either require creating or reading a figure. For these problems, I graded the model solutions as incorrect. In the future when models have these abilities, this should be changed.
To cite this article, use the following citation:
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
41dbfc3f1de9ce91cb90bd4a73c74f099fba121a | # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jayem-11/mozilla_commonvoice_hackathon_preprocessed_train_batch_5 | [
"region:us"
]
| 2023-11-13T22:51:53+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "input_length", "dtype": "int64"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}, {"name": "labels_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13358723437.375, "num_examples": 11733}], "download_size": 4082315471, "dataset_size": 13358723437.375}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T22:56:03+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_5"
More Information needed | [
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_5\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_5\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_5\"\n\nMore Information needed"
]
|
ab93ecd31a4709b74d4224f25ae6b772fbdee0f4 | # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jayem-11/mozilla_commonvoice_hackathon_preprocessed_train_batch_6 | [
"region:us"
]
| 2023-11-13T22:58:42+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "input_length", "dtype": "int64"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}, {"name": "labels_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13360423472.25, "num_examples": 11734}], "download_size": 4087377242, "dataset_size": 13360423472.25}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T23:03:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_6"
More Information needed | [
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_6\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_6\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_6\"\n\nMore Information needed"
]
|
21f510b07b569d29cc5e87a797b085a2111eebeb | # Dataset Card for "neznaika_dataset_final_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | BogdanNV/neznaika_dataset_final_test | [
"region:us"
]
| 2023-11-13T23:01:32+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1131117, "num_examples": 621}], "download_size": 166104, "dataset_size": 1131117}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T23:01:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "neznaika_dataset_final_test"
More Information needed | [
"# Dataset Card for \"neznaika_dataset_final_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"neznaika_dataset_final_test\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"neznaika_dataset_final_test\"\n\nMore Information needed"
]
|
3c02adfa6c22d8c2cc070a91d447ca8e712ac1ab | # Dataset Card for "TinyStoriesAll"
This dataset is converted from the TinyStories dataset https://huggingface.co/datasets/roneneldan/TinyStories
It includes all stories generated by both GPT-4 and GPT-3.5 without any metadata. The data has been deduplicated and had unicode errors fixed.
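The card does not say which tools were used for cleaning; as a rough illustration only, the kind of processing described (Unicode repair plus exact-match deduplication) could be sketched as below, assuming the `ftfy` library for the Unicode step.

```python
import ftfy

def clean_stories(stories):
    """Fix unicode/mojibake errors and drop exact duplicates, preserving order."""
    seen = set()
    cleaned = []
    for story in stories:
        fixed = ftfy.fix_text(story).strip()
        if fixed and fixed not in seen:
            seen.add(fixed)
            cleaned.append(fixed)
    return cleaned

print(clean_stories(["Once upon a timeâ€¦", "Once upon a time…", "Once upon a time…"]))
```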
| Qilex/TinyStoriesAll | [
"language:en",
"region:us"
]
| 2023-11-13T23:02:10+00:00 | {"language": ["en"], "dataset_info": {"features": [{"name": "Text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3865790579, "num_examples": 4967648}], "download_size": 1986208411, "dataset_size": 3865790579}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T23:15:07+00:00 | []
| [
"en"
]
| TAGS
#language-English #region-us
| # Dataset Card for "TinyStoriesAll"
This dataset is converted from the TinyStories dataset URL
It includes all stories generated by both GPT-4 and GPT-3.5 without any metadata. The data has been deduplicated and had unicode errors fixed.
| [
"# Dataset Card for \"TinyStoriesAll\"\n\nThis dataset is converted from the TinyStories dataset URL\n\nIt includes all stories generated by both GPT-4 and GPT-3.5 without any metadata. The data has been deduplicated and had unicode errors fixed."
]
| [
"TAGS\n#language-English #region-us \n",
"# Dataset Card for \"TinyStoriesAll\"\n\nThis dataset is converted from the TinyStories dataset URL\n\nIt includes all stories generated by both GPT-4 and GPT-3.5 without any metadata. The data has been deduplicated and had unicode errors fixed."
]
| [
10,
66
]
| [
"passage: TAGS\n#language-English #region-us \n# Dataset Card for \"TinyStoriesAll\"\n\nThis dataset is converted from the TinyStories dataset URL\n\nIt includes all stories generated by both GPT-4 and GPT-3.5 without any metadata. The data has been deduplicated and had unicode errors fixed."
]
|
0aca5b467549e7dc87e10651789ab5f750058ccc | # Dataset Card for "ddr_controlnet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marcelittle/ddr_controlnet | [
"region:us"
]
| 2023-11-13T23:27:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "conditioning_images", "1": "images"}}}}], "splits": [{"name": "train", "num_bytes": 31783572.0, "num_examples": 766}], "download_size": 27650923, "dataset_size": 31783572.0}} | 2023-11-13T23:45:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ddr_controlnet"
More Information needed | [
"# Dataset Card for \"ddr_controlnet\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ddr_controlnet\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ddr_controlnet\"\n\nMore Information needed"
]
|
90daa6c6eee11976c67ad09e04c28198c828fc4c | # Dataset Card for "D50_val_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TuanBC/D50_val_datasets | [
"region:us"
]
| 2023-11-13T23:31:11+00:00 | {"dataset_info": {"features": [{"name": "key", "dtype": "string"}, {"name": "audio_path", "dtype": "string"}, {"name": "text_path", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 8188711689, "num_examples": 8522}], "download_size": 1382194960, "dataset_size": 8188711689}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-13T23:39:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "D50_val_datasets"
More Information needed | [
"# Dataset Card for \"D50_val_datasets\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"D50_val_datasets\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"D50_val_datasets\"\n\nMore Information needed"
]
|
e6ee6d4957d46f5ba03ac5fe8c7bbaac0b551828 | # Dataset Card for "tables_qa_token_classification_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | looppayments/tables_qa_token_classification_dataset | [
"region:us"
]
| 2023-11-13T23:33:09+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": {"array3_d": {"shape": [3, 224, 224], "dtype": "float32"}}}, {"name": "input_ids", "sequence": "int64"}, {"name": "attention_mask", "sequence": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "bbox", "dtype": {"array2_d": {"shape": [512, 4], "dtype": "int64"}}}, {"name": "labels", "sequence": "int64"}, {"name": "artifact_qid", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45728183928.665344, "num_examples": 71536}, {"name": "test", "num_bytes": 11432685215.334654, "num_examples": 17885}], "download_size": 2369838681, "dataset_size": 57160869144.0}} | 2023-12-13T00:01:55+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tables_qa_token_classification_dataset"
More Information needed | [
"# Dataset Card for \"tables_qa_token_classification_dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tables_qa_token_classification_dataset\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tables_qa_token_classification_dataset\"\n\nMore Information needed"
]
|
46a03ee028493f2cdc63047e7157f4fa8cfcf5c1 | # Dataset Card for "roleplay_realm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | higgsfield/roleplay_realm | [
"region:us"
]
| 2023-11-14T00:04:38+00:00 | {"dataset_info": {"features": [{"name": "chat", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8884203, "num_examples": 4320}], "download_size": 3463927, "dataset_size": 8884203}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-14T00:04:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "roleplay_realm"
More Information needed | [
"# Dataset Card for \"roleplay_realm\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"roleplay_realm\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"roleplay_realm\"\n\nMore Information needed"
]
|
520a3aa5dffd71363d902aa2f472703485ffdc6b | <!-- To update the above `dataset_info` section, please run the following command: `datasets-cli test open_australian_legal_embeddings.py --save_info --all_configs`. -->
# **Open Australian Legal Embeddings ⚖️**
<a href="https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings" alt="Release"><img src="https://img.shields.io/badge/release-v1.0.0-green"></a>
The Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents.
Trained on the largest open database of Australian law, the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), the Embeddings consist of roughly 5.2 million 384-dimensional vectors embedded with [`BAAI/bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5).
The Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots.
To ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).
## Usage 👩💻
The below code snippet illustrates how the Embeddings may be loaded and queried via the [Hugging Face Datasets](https://huggingface.co/docs/datasets/index) Python library:
```python
import itertools
import sklearn.metrics.pairwise
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('BAAI/bge-small-en-v1.5')
instruction = 'Represent this sentence for searching relevant passages: '
# Load the embeddings.
oale = load_dataset('open_australian_legal_embeddings.py', split='train')
# Sample the first 100,000 embeddings.
sample = list(itertools.islice(oale, 100000))
# Embed a query.
query = model.encode(instruction + 'Who is the Governor-General of Australia?', normalize_embeddings=True)
# Identify the most similar embedding to the query.
similarities = sklearn.metrics.pairwise.cosine_similarity([query], [embedding['embedding'] for embedding in sample])
most_similar_index = similarities.argmax()
most_similar = sample[most_similar_index]
# Print the most similar text.
print(most_similar['text'])
```
To speed up the loading of the Embeddings, you may wish to install [`orjson`](https://github.com/ijl/orjson).
## Structure 🗂️
The Embeddings are stored in [`data/embeddings.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/embeddings.jsonl), a json lines file where each line is a list of 384 32-bit floating point numbers. Associated metadata is stored in [`data/metadatas.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/metadatas.jsonl) and the corresponding texts are located in [`data/texts.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/texts.jsonl).
The metadata fields are the same as those used for the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus#structure-%F0%9F%97%82%EF%B8%8F), barring the `text` field, which was removed, and with the addition of the `is_last_chunk` key, which is a boolean flag for whether a text is the last chunk of a document (used to detect and remove corrupted documents when creating and updating the Embeddings).
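As a concrete illustration of this layout, the snippet below streams the three files in lockstep with `orjson`. It is only a sketch: it assumes the files have already been downloaded into a local `data/` directory and that each line of `metadatas.jsonl` and `texts.jsonl` is itself JSON-encoded.

```python
import orjson

# Minimal sketch: assumes data/embeddings.jsonl, data/metadatas.jsonl and
# data/texts.jsonl have been downloaded into a local `data/` directory.
with open('data/embeddings.jsonl', 'rb') as embeddings_file, \
        open('data/metadatas.jsonl', 'rb') as metadatas_file, \
        open('data/texts.jsonl', 'rb') as texts_file:
    for embedding_line, metadata_line, text_line in zip(embeddings_file, metadatas_file, texts_file):
        embedding = orjson.loads(embedding_line)  # A list of 384 floats.
        metadata = orjson.loads(metadata_line)    # Corpus metadata fields plus 'is_last_chunk'.
        text = orjson.loads(text_line)            # The chunk's text.
        print(len(embedding), metadata.get('citation'), text[:50])
        break  # Only the first record is shown here.
```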
## Creation 🧪
All documents in the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus#statistics-%F0%9F%93%8A) were split into semantically meaningful chunks up to 512-tokens-long (as determined by [`bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5)'s tokeniser) with the [`semchunk`](https://github.com/umarbutler/semchunk) Python library. These chunks included a header embedding documents' titles, jurisdictions and types in the following format:
```perl
Title: {title}
Jurisdiction: {jurisdiction}
Type: {type}
{text}
```
When embedded into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The `commonwealth` jurisdiction was also renamed to 'Commonwealth of Australia'. In the cases of types, `primary_legislation` became 'Act', `secondary_legislation` became 'Regulation', `bill` became 'Bill' and `decision` became 'Judgment'.
The chunks were then vectorised by [`bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5) on a single GeForce RTX 2080 Ti with a batch size of 32 via the [`SentenceTransformers`](https://www.sbert.net/) library.
The resulting embeddings were serialised as json-encoded lists of floats by [`orjson`](https://github.com/ijl/orjson) and stored in [`data/embeddings.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/embeddings.jsonl). The corresponding metadata and texts (with their headers removed) were saved to [`data/metadatas.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/metadatas.jsonl) and [`data/texts.jsonl`](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/texts.jsonl), respectively.
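The snippet below is a rough sketch of the header-plus-vectorisation step described above, applied to a single placeholder chunk. The chunking itself is elided, the sample values are hypothetical and the header whitespace is approximated, so treat it as an illustration rather than the maintained creator code linked below.

```python
import orjson
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-small-en-v1.5')

# Placeholder chunk standing in for the output of the semantic chunker.
chunks = [{
    'title': 'Example Act 2000 (Cth)',            # Hypothetical sample values.
    'jurisdiction': 'Commonwealth of Australia',
    'type': 'Act',
    'text': 'An Act relating to an example subject matter...',
}]

# Prepend the header described above to each chunk before embedding it.
headed = [
    f"Title: {c['title']}\nJurisdiction: {c['jurisdiction']}\nType: {c['type']}\n\n{c['text']}"
    for c in chunks
]

embeddings = model.encode(headed, batch_size=32, normalize_embeddings=True)

# Serialise each embedding as a json-encoded list of floats, one per line.
with open('embeddings.jsonl', 'wb') as file:
    for embedding in embeddings:
        file.write(orjson.dumps(embedding.tolist()) + b'\n')
```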
The code used to create and update the Embeddings may be found [here](https://github.com/umarbutler/open-australian-legal-embeddings-creator).
## Changelog 🔄
All notable changes to the Embeddings are documented in its [Changelog 🔄](https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings/blob/main/CHANGELOG.md).
This project adheres to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Licence 📜
The Embeddings are distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).
## Citation 🔖
If you've relied on the Embeddings for your work, please cite:
```latex
@misc{butler-2023-open-australian-legal-embeddings,
author = {Butler, Umar},
year = {2023},
title = {Open Australian Legal Embeddings},
publisher = {Hugging Face},
version = {1.0.0},
doi = {10.57967/hf/1347},
url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-embeddings}
}
```
## Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks the creators of the many Python libraries relied upon in the creation of the Embeddings.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs. | umarbutler/open-australian-legal-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:found",
"size_categories:1M<n<10M",
"source_datasets:umarbutler/open-australian-legal-corpus",
"language:en",
"license:other",
"law",
"legal",
"australia",
"embeddings",
"doi:10.57967/hf/1347",
"region:us"
]
| 2023-11-14T00:14:21+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": "other", "size_categories": ["1M<n<10M"], "source_datasets": ["umarbutler/open-australian-legal-corpus"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Open Australian Legal Embeddings", "license_name": "open-australian-legal-corpus", "license_link": "https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md", "tags": ["law", "legal", "australia", "embeddings"], "language_details": "en-AU, en-GB", "viewer": true, "dataset_info": {"features": [{"name": "version_id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "jurisdiction", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "citation", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "is_last_chunk", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "embedding", "list": "float32"}], "config_name": "train", "splits": [{"name": "train", "num_bytes": 28500857221, "num_examples": 5208238}], "download_size": 45586801753, "dataset_size": 28500857221}} | 2023-12-01T05:29:52+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-no-annotation #language_creators-found #size_categories-1M<n<10M #source_datasets-umarbutler/open-australian-legal-corpus #language-English #license-other #law #legal #australia #embeddings #doi-10.57967/hf/1347 #region-us
|
# Open Australian Legal Embeddings ️
<a href="URL alt="Release"><img src="URL
The Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents.
Trained on the largest open database of Australian law, the Open Australian Legal Corpus, the Embeddings consist of roughly 5.2 million 384-dimensional vectors embedded with 'BAAI/bge-small-en-v1.5'.
The Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots.
To ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the Open Australian Legal Corpus.
## Usage
The below code snippet illustrates how the Embeddings may be loaded and queried via the Hugging Face Datasets Python library:
To speed up the loading of the Embeddings, you may wish to install 'orjson'.
## Structure ️
The Embeddings are stored in 'data/URL', a json lines file where each line is a list of 384 32-bit floating point numbers. Associated metadata is stored in 'data/URL' and the corresponding texts are located in 'data/URL'.
The metadata fields are the same as those used for the Open Australian Legal Corpus, barring the 'text' field, which was removed, and with the addition of the 'is_last_chunk' key, which is a boolean flag for whether a text is the last chunk of a document (used to detect and remove corrupted documents when creating and updating the Embeddings).
## Creation
All documents in the Open Australian Legal Corpus were split into semantically meaningful chunks up to 512-tokens-long (as determined by 'bge-small-en-v1.5''s tokeniser) with the 'semchunk' Python library. These chunks included a header embedding documents' titles, jurisdictions and types in the following format:
When embedded into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The 'commonwealth' jurisdiction was also renamed to 'Commonwealth of Australia'. In the cases of types, 'primary_legislation' became 'Act', 'secondary_legislation' became 'Regulation', 'bill' became 'Bill' and 'decision' became 'Judgment'.
The chunks were then vectorised by 'bge-small-en-v1.5' on a single GeForce RTX 2080 Ti with a batch size of 32 via the 'SentenceTransformers' library.
The resulting embeddings were serialised as json-encoded lists of floats by 'orjson' and stored in 'data/URL'. The corresponding metadata and texts (with their headers removed) were saved to 'data/URL' and 'data/URL', respectively.
The code used to create and update the Embeddings may be found here.
## Changelog
All notable changes to the Embeddings are documented in its Changelog .
This project adheres to Keep a Changelog and Semantic Versioning.
## Licence
The Embeddings are distributed under the same licence as the Open Australian Legal Corpus.
If you've relied on the Embeddings for your work, please cite:
## Acknowledgements
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks the creators of the many Python libraries relied upon in the creation of the Embeddings.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs. | [
"# Open Australian Legal Embeddings ️\n<a href=\"URL alt=\"Release\"><img src=\"URL\n\nThe Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents.\n\nTrained on the largest open database of Australian law, the Open Australian Legal Corpus, the Embeddings consist of roughly 5.2 million 384-dimensional vectors embedded with 'BAAI/bge-small-en-v1.5'.\n\nThe Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots.\n\nTo ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the Open Australian Legal Corpus.",
"## Usage \nThe below code snippet illustrates how the Embeddings may be loaded and queried via the Hugging Face Datasets Python library:\n\n\nTo speed up the loading of the Embeddings, you may wish to install 'orjson'.",
"## Structure ️\nThe Embeddings are stored in 'data/URL', a json lines file where each line is a list of 384 32-bit floating point numbers. Associated metadata is stored in 'data/URL' and the corresponding texts are located in 'data/URL'.\n\nThe metadata fields are the same as those used for the Open Australian Legal Corpus, barring the 'text' field, which was removed, and with the addition of the 'is_last_chunk' key, which is a boolean flag for whether a text is the last chunk of a document (used to detect and remove corrupted documents when creating and updating the Embeddings).",
"## Creation \nAll documents in the Open Australian Legal Corpus were split into semantically meaningful chunks up to 512-tokens-long (as determined by 'bge-small-en-v1.5''s tokeniser) with the 'semchunk' Python library. These chunks included a header embedding documents' titles, jurisdictions and types in the following format:\n\n\nWhen embedded into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The 'commonwealth' jurisdiction was also renamed to 'Commonwealth of Australia'. In the cases of types, 'primary_legislation' became 'Act', 'secondary_legislation' became 'Regulation', 'bill' became 'Bill' and 'decision' became 'Judgment'.\n\nThe chunks were then vectorised by 'bge-small-en-v1.5' on a single GeForce RTX 2080 Ti with a batch size of 32 via the 'SentenceTransformers' library.\n\nThe resulting embeddings were serialised as json-encoded lists of floats by 'orjson' and stored in 'data/URL'. The corresponding metadata and texts (with their headers removed) were saved to 'data/URL' and 'data/URL', respectively.\n\nThe code used to create and update the Embeddings may be found here.",
"## Changelog \nAll notable changes to the Embeddings are documented in its Changelog .\n\nThis project adheres to Keep a Changelog and Semantic Versioning.",
"## Licence \nThe Embeddings are distributed under the same licence as the Open Australian Legal Corpus.\n\n\nIf you've relied on the Embeddings for your work, please cite:",
"## Acknowledgements \nIn the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.\n\nThe author thanks the creators of the many Python libraries relied upon in the creation of the Embeddings.\n\nFinally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs."
]
| [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-no-annotation #language_creators-found #size_categories-1M<n<10M #source_datasets-umarbutler/open-australian-legal-corpus #language-English #license-other #law #legal #australia #embeddings #doi-10.57967/hf/1347 #region-us \n",
"# Open Australian Legal Embeddings ️\n<a href=\"URL alt=\"Release\"><img src=\"URL\n\nThe Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents.\n\nTrained on the largest open database of Australian law, the Open Australian Legal Corpus, the Embeddings consist of roughly 5.2 million 384-dimensional vectors embedded with 'BAAI/bge-small-en-v1.5'.\n\nThe Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots.\n\nTo ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the Open Australian Legal Corpus.",
"## Usage \nThe below code snippet illustrates how the Embeddings may be loaded and queried via the Hugging Face Datasets Python library:\n\n\nTo speed up the loading of the Embeddings, you may wish to install 'orjson'.",
"## Structure ️\nThe Embeddings are stored in 'data/URL', a json lines file where each line is a list of 384 32-bit floating point numbers. Associated metadata is stored in 'data/URL' and the corresponding texts are located in 'data/URL'.\n\nThe metadata fields are the same as those used for the Open Australian Legal Corpus, barring the 'text' field, which was removed, and with the addition of the 'is_last_chunk' key, which is a boolean flag for whether a text is the last chunk of a document (used to detect and remove corrupted documents when creating and updating the Embeddings).",
"## Creation \nAll documents in the Open Australian Legal Corpus were split into semantically meaningful chunks up to 512-tokens-long (as determined by 'bge-small-en-v1.5''s tokeniser) with the 'semchunk' Python library. These chunks included a header embedding documents' titles, jurisdictions and types in the following format:\n\n\nWhen embedded into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The 'commonwealth' jurisdiction was also renamed to 'Commonwealth of Australia'. In the cases of types, 'primary_legislation' became 'Act', 'secondary_legislation' became 'Regulation', 'bill' became 'Bill' and 'decision' became 'Judgment'.\n\nThe chunks were then vectorised by 'bge-small-en-v1.5' on a single GeForce RTX 2080 Ti with a batch size of 32 via the 'SentenceTransformers' library.\n\nThe resulting embeddings were serialised as json-encoded lists of floats by 'orjson' and stored in 'data/URL'. The corresponding metadata and texts (with their headers removed) were saved to 'data/URL' and 'data/URL', respectively.\n\nThe code used to create and update the Embeddings may be found here.",
"## Changelog \nAll notable changes to the Embeddings are documented in its Changelog .\n\nThis project adheres to Keep a Changelog and Semantic Versioning.",
"## Licence \nThe Embeddings are distributed under the same licence as the Open Australian Legal Corpus.\n\n\nIf you've relied on the Embeddings for your work, please cite:",
"## Acknowledgements \nIn the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.\n\nThe author thanks the creators of the many Python libraries relied upon in the creation of the Embeddings.\n\nFinally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs."
]
| [
117,
182,
60,
156,
334,
37,
41,
129
]
| [
"passage: TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-no-annotation #language_creators-found #size_categories-1M<n<10M #source_datasets-umarbutler/open-australian-legal-corpus #language-English #license-other #law #legal #australia #embeddings #doi-10.57967/hf/1347 #region-us \n# Open Australian Legal Embeddings ️\n<a href=\"URL alt=\"Release\"><img src=\"URL\n\nThe Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents.\n\nTrained on the largest open database of Australian law, the Open Australian Legal Corpus, the Embeddings consist of roughly 5.2 million 384-dimensional vectors embedded with 'BAAI/bge-small-en-v1.5'.\n\nThe Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots.\n\nTo ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the Open Australian Legal Corpus.## Usage \nThe below code snippet illustrates how the Embeddings may be loaded and queried via the Hugging Face Datasets Python library:\n\n\nTo speed up the loading of the Embeddings, you may wish to install 'orjson'.",
"passage: ## Structure ️\nThe Embeddings are stored in 'data/URL', a json lines file where each line is a list of 384 32-bit floating point numbers. Associated metadata is stored in 'data/URL' and the corresponding texts are located in 'data/URL'.\n\nThe metadata fields are the same as those used for the Open Australian Legal Corpus, barring the 'text' field, which was removed, and with the addition of the 'is_last_chunk' key, which is a boolean flag for whether a text is the last chunk of a document (used to detect and remove corrupted documents when creating and updating the Embeddings).## Creation \nAll documents in the Open Australian Legal Corpus were split into semantically meaningful chunks up to 512-tokens-long (as determined by 'bge-small-en-v1.5''s tokeniser) with the 'semchunk' Python library. These chunks included a header embedding documents' titles, jurisdictions and types in the following format:\n\n\nWhen embedded into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The 'commonwealth' jurisdiction was also renamed to 'Commonwealth of Australia'. In the cases of types, 'primary_legislation' became 'Act', 'secondary_legislation' became 'Regulation', 'bill' became 'Bill' and 'decision' became 'Judgment'.\n\nThe chunks were then vectorised by 'bge-small-en-v1.5' on a single GeForce RTX 2080 Ti with a batch size of 32 via the 'SentenceTransformers' library.\n\nThe resulting embeddings were serialised as json-encoded lists of floats by 'orjson' and stored in 'data/URL'. The corresponding metadata and texts (with their headers removed) were saved to 'data/URL' and 'data/URL', respectively.\n\nThe code used to create and update the Embeddings may be found here.## Changelog \nAll notable changes to the Embeddings are documented in its Changelog .\n\nThis project adheres to Keep a Changelog and Semantic Versioning.## Licence \nThe Embeddings are distributed under the same licence as the Open Australian Legal Corpus.\n\n\nIf you've relied on the Embeddings for your work, please cite:"
]
|
a47d212f521440a6e4c3be1d6ef41cb0ac76c87a | # Bộ dữ liệu về các ngành trên đại học | H4438/education-major | [
"region:us"
]
| 2023-11-14T01:28:28+00:00 | {} | 2023-11-16T08:03:17+00:00 | []
| []
| TAGS
#region-us
| # Bộ dữ liệu về các ngành trên đại học | [
"# Bộ dữ liệu về các ngành trên đại học"
]
| [
"TAGS\n#region-us \n",
"# Bộ dữ liệu về các ngành trên đại học"
]
| [
6,
10
]
| [
"passage: TAGS\n#region-us \n# Bộ dữ liệu về các ngành trên đại học"
]
|
772bfbf6106d137962e185dbf25e6c687fa10d30 | # Dataset Card for "hacker_news_top_comment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | higgsfield/hacker_news_top_comment | [
"region:us"
]
| 2023-11-14T01:34:14+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77485794, "num_examples": 118779}], "download_size": 52065753, "dataset_size": 77485794}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-14T02:01:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hacker_news_top_comment"
More Information needed | [
"# Dataset Card for \"hacker_news_top_comment\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hacker_news_top_comment\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hacker_news_top_comment\"\n\nMore Information needed"
]
|
9605ca3d16b07c46609287aeb7eb431ae1340d62 | # Dataset Card for "year"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jxie/year | [
"region:us"
]
| 2023-11-14T01:36:27+00:00 | {"dataset_info": {"features": [{"name": "inputs", "sequence": "float64"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 271551504, "num_examples": 370972}, {"name": "val", "num_bytes": 67887876, "num_examples": 92743}, {"name": "test", "num_bytes": 37793160, "num_examples": 51630}], "download_size": 385557206, "dataset_size": 377232540}} | 2023-11-14T01:36:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "year"
More Information needed | [
"# Dataset Card for \"year\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"year\"\n\nMore Information needed"
]
| [
6,
11
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"year\"\n\nMore Information needed"
]
|
6b2c2e4deaf36d83686a30ce829e31221b90ca9a | Bộ QA tập trung về hỏi thông tin liên lạc của trường | H4438/education-QA-contact | [
"region:us"
]
| 2023-11-14T01:38:22+00:00 | {} | 2023-11-14T01:40:00+00:00 | []
| []
| TAGS
#region-us
| Bộ QA tập trung về hỏi thông tin liên lạc của trường | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
d1371c3551be8531f557c9d96dce16d4ce03c029 | # Dataset Card for "esg2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bh8648/esg2 | [
"region:us"
]
| 2023-11-14T01:50:38+00:00 | {"dataset_info": {"features": [{"name": "Major Category", "dtype": "string"}, {"name": "Middle Categoty", "dtype": "string"}, {"name": "Small Category", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 176318, "num_examples": 45}], "download_size": 92264, "dataset_size": 176318}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-14T01:50:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "esg2"
More Information needed | [
"# Dataset Card for \"esg2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"esg2\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"esg2\"\n\nMore Information needed"
]
|
3de89c33b278904d10a39007076cefad7e25e977 | # Dataset Card for "Gregmat_GRE_frequency"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | chirunder/Gregmat_GRE_frequency | [
"region:us"
]
| 2023-11-14T01:57:21+00:00 | {"dataset_info": {"features": [{"name": "word", "dtype": "string"}, {"name": "frequency", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 22450, "num_examples": 1111}], "download_size": 13408, "dataset_size": 22450}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-14T01:57:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Gregmat_GRE_frequency"
More Information needed | [
"# Dataset Card for \"Gregmat_GRE_frequency\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Gregmat_GRE_frequency\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Gregmat_GRE_frequency\"\n\nMore Information needed"
]
|
3b623ca11f477b0016e105158e300eb5255aa896 |
# Gendec: Gender Detection from Japanese Names with Machine Learning
This is the official repository for the Gendec framework from the paper [Gendec: Gender Detection from Japanese Names with Machine Learning](https://arxiv.org/pdf/2311.11001.pdf), which was accepted at the [ISDA'23](https://www.mirlabs.org/isda23/).
# Citation Information
The provided dataset is only used for research purposes!
```
@misc{pham2023gendec,
title={Gendec: A Machine Learning-based Framework for Gender Detection from Japanese Names},
author={Duong Tien Pham and Luan Thanh Nguyen},
year={2023},
eprint={2311.11001},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Abstract
Every human has their own name, a fundamental aspect of their identity and cultural heritage. The name often conveys a wealth of information, including details about an individual's background, ethnicity, and, especially, their gender. By detecting gender through the analysis of names, researchers can unlock valuable insights into linguistic patterns and cultural norms, which can be applied to practical applications. Hence, this work presents a novel dataset for Japanese name gender detection comprising 64,139 full names in romaji, hiragana, and kanji forms, along with their biological genders. Moreover, we propose Gendec, a framework for gender detection from Japanese names that leverages diverse approaches, including traditional machine learning techniques or cutting-edge transfer learning models, to predict the gender associated with Japanese names accurately. Through a thorough investigation, the proposed framework is expected to be effective and serve potential applications in various domains.
# Dataset
The Gendec dataset consists of 64,139 Japanese names with biological genders in kanji, hiragana, and romaji forms.
The dataset is divided into three parts as below:
1. Train set: 44.9K name-gender pairs
2. Valid set: 6.41K name-gender pairs
3. Test set: 12.8K name-gender pairs
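A minimal loading sketch with the Hugging Face `datasets` library is given below; it assumes the repository's data files can be auto-detected by `load_dataset`, and it inspects rather than hard-codes split and column names, which are not spelled out in this card.

```python
from datasets import load_dataset

# Minimal sketch: split and column names are inspected, not assumed.
gendec = load_dataset('tarudesu/gendec-dataset')

print(gendec)                           # Lists the available splits and columns.
first_split = next(iter(gendec.values()))
print(first_split[0])                   # One record, e.g. name forms and gender.
```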
# Contact
Please feel free to contact us by email at [email protected] if you need any further information! | tarudesu/gendec-dataset | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ja",
"code",
"arxiv:2311.11001",
"region:us"
]
| 2023-11-14T01:59:12+00:00 | {"language": ["ja"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "Japanese Gender Detection Based on Names Dataset", "tags": ["code"], "datasets": ["tarudesu/gendec-dataset"], "metrics": ["f1"], "library_name": "transformers"} | 2023-11-25T02:08:48+00:00 | [
"2311.11001"
]
| [
"ja"
]
| TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-Japanese #code #arxiv-2311.11001 #region-us
|
# Gendec: Gender Dection from Japanese Names with Machine Learning
This is the official repository for the Gendec framework from the paper Gendec: Gender Dection from Japanese Names with Machine Learning, which was accepted at the ISDA'23.
The provided dataset is only used for research purposes!
# Abstract
Every human has their own name, a fundamental aspect of their identity and cultural heritage. The name often conveys a wealth of information, including details about an individual's background, ethnicity, and, especially, their gender. By detecting gender through the analysis of names, researchers can unlock valuable insights into linguistic patterns and cultural norms, which can be applied to practical applications. Hence, this work presents a novel dataset for Japanese name gender detection comprising 64,139 full names in romaji, hiragana, and kanji forms, along with their biological genders. Moreover, we propose Gendec, a framework for gender detection from Japanese names that leverages diverse approaches, including traditional machine learning techniques or cutting-edge transfer learning models, to predict the gender associated with Japanese names accurately. Through a thorough investigation, the proposed framework is expected to be effective and serve potential applications in various domains.
# Dataset
The Gendec dataset is consist of 64,139 Japanese names with biological genders in kanji, hiragana, and romaji form.
The dataset is divided into three parts as below:
1. Train set: 44.9K question-answer pairs
2. Valid set: 6.41 question-answer pairs
3. Test set: 12.8 question-answer pairs
# Contact
Please feel free to contact us by email luannt@URL if you have any further information! | [
"# Gendec: Gender Dection from Japanese Names with Machine Learning\nThis is the official repository for the Gendec framework from the paper Gendec: Gender Dection from Japanese Names with Machine Learning, which was accepted at the ISDA'23.\n\n\nThe provided dataset is only used for research purposes!",
"# Abstract\nEvery human has their own name, a fundamental aspect of their identity and cultural heritage. The name often conveys a wealth of information, including details about an individual's background, ethnicity, and, especially, their gender. By detecting gender through the analysis of names, researchers can unlock valuable insights into linguistic patterns and cultural norms, which can be applied to practical applications. Hence, this work presents a novel dataset for Japanese name gender detection comprising 64,139 full names in romaji, hiragana, and kanji forms, along with their biological genders. Moreover, we propose Gendec, a framework for gender detection from Japanese names that leverages diverse approaches, including traditional machine learning techniques or cutting-edge transfer learning models, to predict the gender associated with Japanese names accurately. Through a thorough investigation, the proposed framework is expected to be effective and serve potential applications in various domains.",
"# Dataset\nThe Gendec dataset is consist of 64,139 Japanese names with biological genders in kanji, hiragana, and romaji form.\n\nThe dataset is divided into three parts as below:\n1. Train set: 44.9K question-answer pairs\n2. Valid set: 6.41 question-answer pairs\n3. Test set: 12.8 question-answer pairs",
"# Contact\nPlease feel free to contact us by email luannt@URL if you have any further information!"
]
| [
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-Japanese #code #arxiv-2311.11001 #region-us \n",
"# Gendec: Gender Dection from Japanese Names with Machine Learning\nThis is the official repository for the Gendec framework from the paper Gendec: Gender Dection from Japanese Names with Machine Learning, which was accepted at the ISDA'23.\n\n\nThe provided dataset is only used for research purposes!",
"# Abstract\nEvery human has their own name, a fundamental aspect of their identity and cultural heritage. The name often conveys a wealth of information, including details about an individual's background, ethnicity, and, especially, their gender. By detecting gender through the analysis of names, researchers can unlock valuable insights into linguistic patterns and cultural norms, which can be applied to practical applications. Hence, this work presents a novel dataset for Japanese name gender detection comprising 64,139 full names in romaji, hiragana, and kanji forms, along with their biological genders. Moreover, we propose Gendec, a framework for gender detection from Japanese names that leverages diverse approaches, including traditional machine learning techniques or cutting-edge transfer learning models, to predict the gender associated with Japanese names accurately. Through a thorough investigation, the proposed framework is expected to be effective and serve potential applications in various domains.",
"# Dataset\nThe Gendec dataset is consist of 64,139 Japanese names with biological genders in kanji, hiragana, and romaji form.\n\nThe dataset is divided into three parts as below:\n1. Train set: 44.9K question-answer pairs\n2. Valid set: 6.41 question-answer pairs\n3. Test set: 12.8 question-answer pairs",
"# Contact\nPlease feel free to contact us by email luannt@URL if you have any further information!"
]
| [
46,
68,
207,
84,
22
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-Japanese #code #arxiv-2311.11001 #region-us \n# Gendec: Gender Dection from Japanese Names with Machine Learning\nThis is the official repository for the Gendec framework from the paper Gendec: Gender Dection from Japanese Names with Machine Learning, which was accepted at the ISDA'23.\n\n\nThe provided dataset is only used for research purposes!# Abstract\nEvery human has their own name, a fundamental aspect of their identity and cultural heritage. The name often conveys a wealth of information, including details about an individual's background, ethnicity, and, especially, their gender. By detecting gender through the analysis of names, researchers can unlock valuable insights into linguistic patterns and cultural norms, which can be applied to practical applications. Hence, this work presents a novel dataset for Japanese name gender detection comprising 64,139 full names in romaji, hiragana, and kanji forms, along with their biological genders. Moreover, we propose Gendec, a framework for gender detection from Japanese names that leverages diverse approaches, including traditional machine learning techniques or cutting-edge transfer learning models, to predict the gender associated with Japanese names accurately. Through a thorough investigation, the proposed framework is expected to be effective and serve potential applications in various domains.# Dataset\nThe Gendec dataset is consist of 64,139 Japanese names with biological genders in kanji, hiragana, and romaji form.\n\nThe dataset is divided into three parts as below:\n1. Train set: 44.9K question-answer pairs\n2. Valid set: 6.41 question-answer pairs\n3. Test set: 12.8 question-answer pairs# Contact\nPlease feel free to contact us by email luannt@URL if you have any further information!"
]
|
6b0cd7370d51169dd065f37912a0ae260f7b32da | # Dataset Card for "complexquestion_2WIKIMQA_100_Mistral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | presencesw/complexquestion_2WIKIMQA_100_Mistral | [
"region:us"
]
| 2023-11-14T02:00:01+00:00 | {"dataset_info": {"features": [{"name": "entities", "sequence": "null"}, {"name": "triplets", "sequence": "null"}, {"name": "answer", "dtype": "string"}, {"name": "complex_question", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "user_prompt", "dtype": "string"}, {"name": "llm_output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 757515, "num_examples": 100}], "download_size": 92013, "dataset_size": 757515}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-14T02:00:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "complexquestion_2WIKIMQA_100_Mistral"
More Information needed | [
"# Dataset Card for \"complexquestion_2WIKIMQA_100_Mistral\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"complexquestion_2WIKIMQA_100_Mistral\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"complexquestion_2WIKIMQA_100_Mistral\"\n\nMore Information needed"
]
|
5a13b4e9b19c68f9159e76e3a153125da3c39495 | # Dataset Card for "law_court_opinion_casual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zxvix/law_court_opinion_casual | [
"region:us"
]
| 2023-11-14T02:11:10+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 158304.0, "num_examples": 100}], "download_size": 107312, "dataset_size": 158304.0}} | 2023-11-14T02:35:10+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "law_court_opinion_casual"
More Information needed | [
"# Dataset Card for \"law_court_opinion_casual\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"law_court_opinion_casual\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"law_court_opinion_casual\"\n\nMore Information needed"
]
|
2523cf4a4b078d4c928e688842526b7ec9f5a84a | # Dataset Card for "best_outputs_model1_v_model2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Rewcifer/best_outputs_model1_v_model2 | [
"region:us"
]
| 2023-11-14T02:16:16+00:00 | {"dataset_info": {"features": [{"name": "true_findings", "dtype": "string"}, {"name": "generated_texts_1", "dtype": "string"}, {"name": "generated_texts_2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1751637, "num_examples": 861}], "download_size": 755469, "dataset_size": 1751637}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-14T02:16:21+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "best_outputs_model1_v_model2"
More Information needed | [
"# Dataset Card for \"best_outputs_model1_v_model2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"best_outputs_model1_v_model2\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"best_outputs_model1_v_model2\"\n\nMore Information needed"
]
|
cee0a0ad22832d429136e6f3534187f7ff9a4f5c |
wikipedia 日本語の文を、各種日本語の embeddings や faiss index へと変換したもの。
- [RAG用途に使える、Wikipedia 日本語の embeddings とベクトル検索用の faiss index を作った](https://secon.dev/entry/2023/12/04/080000-wikipedia-ja-embeddings/)
- [HuggingFace Space 上のデモ](https://huggingface.co/spaces/hotchpotch/wikipedia-japanese-rag-qa)
- [変換スクリプト](https://github.com/hotchpotch/wikipedia-passages-jawiki-embeddings-utils)
## 大元のデータ
- https://huggingface.co/datasets/singletongue/wikipedia-utils
## 検索タスクでのデータ評価
- [ベクトル検索のみで、AI王クイズ第一回コンペに臨む - Q&Aタスクでの複数の日本語embeddingsの評価](https://secon.dev/entry/2023/12/21/080000-vector-search-ai-ou-comp/)
- [OpenAIの新embeddings,text-embedding-3-smallをRAGタスクで評価する](https://secon.dev/entry/2024/01/29/100000-text-embedding-3-small/)
## ライセンス
- `text-embedding-*` のファイルは OpenAI のライセンスに従います。
- それ以外は `CC-BY-SA-4.0` です
| hotchpotch/wikipedia-passages-jawiki-embeddings | [
"language:ja",
"license:other",
"region:us"
]
| 2023-11-14T02:28:33+00:00 | {"language": ["ja"], "license": "other"} | 2024-01-29T01:36:49+00:00 | []
| [
"ja"
]
| TAGS
#language-Japanese #license-other #region-us
|
wikipedia 日本語の文を、各種日本語の embeddings や faiss index へと変換したもの。
- RAG用途に使える、Wikipedia 日本語の embeddings とベクトル検索用の faiss index を作った
- HuggingFace Space 上のデモ
- 変換スクリプト
## 大元のデータ
- URL
## 検索タスクでのデータ評価
- ベクトル検索のみで、AI王クイズ第一回コンペに臨む - Q&Aタスクでの複数の日本語embeddingsの評価
- OpenAIの新embeddings,text-embedding-3-smallをRAGタスクで評価する
## ライセンス
- 'text-embedding-*' のファイルは OpenAI のライセンスに従います。
- それ以外は 'CC-BY-SA-4.0' です
| [
"## 大元のデータ\n\n- URL",
"## 検索タスクでのデータ評価\n\n- ベクトル検索のみで、AI王クイズ第一回コンペに臨む - Q&Aタスクでの複数の日本語embeddingsの評価\n- OpenAIの新embeddings,text-embedding-3-smallをRAGタスクで評価する",
"## ライセンス\n\n- 'text-embedding-*' のファイルは OpenAI のライセンスに従います。\n- それ以外は 'CC-BY-SA-4.0' です"
]
| [
"TAGS\n#language-Japanese #license-other #region-us \n",
"## 大元のデータ\n\n- URL",
"## 検索タスクでのデータ評価\n\n- ベクトル検索のみで、AI王クイズ第一回コンペに臨む - Q&Aタスクでの複数の日本語embeddingsの評価\n- OpenAIの新embeddings,text-embedding-3-smallをRAGタスクで評価する",
"## ライセンス\n\n- 'text-embedding-*' のファイルは OpenAI のライセンスに従います。\n- それ以外は 'CC-BY-SA-4.0' です"
]
| [
17,
7,
68,
44
]
| [
"passage: TAGS\n#language-Japanese #license-other #region-us \n## 大元のデータ\n\n- URL## 検索タスクでのデータ評価\n\n- ベクトル検索のみで、AI王クイズ第一回コンペに臨む - Q&Aタスクでの複数の日本語embeddingsの評価\n- OpenAIの新embeddings,text-embedding-3-smallをRAGタスクで評価する## ライセンス\n\n- 'text-embedding-*' のファイルは OpenAI のライセンスに従います。\n- それ以外は 'CC-BY-SA-4.0' です"
]
|
da2a45b3f1aa5b705f4e98063f5c0874f2cccf4a |
## Dataset Description
- **Homepage:** [ChinaOpen homepage](https://ruc-aimc-lab.github.io/ChinaOpen/)
- **Paper:** [ChinaOpen: A Dataset for Open-World Multimodal Learning](https://doi.org/10.1145/3581783.3612156)
- **Point of Contact:** [Aozhu Chen]([email protected])
### Dataset Summary
ChinaOpen-1k is a dataset sourced from Bilibili, a popular Chinese video-sharing website. It is a manually annotated test set of videos, including manually checked user titles/tags, manually written captions, and manual labels describing the visual objects/actions/scenes shown in the content.
### Languages
Chinese and English
## Dataset Structure
All files are distributed in a single zip package.
```bash
├── ChinaOpen-1k
├── video01.mp4
├── video02.mp4
├── video03.mp4
├── [...]
└── ChinaOpen-1k-annotations.json
```
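As a rough sketch, the annotation file could be read as follows after extracting the zip; the per-video schema is not documented in this card, so the snippet only inspects the structure instead of assuming field names.

```python
import json

# Minimal sketch: assumes the zip has been extracted into the working directory.
with open('ChinaOpen-1k-annotations.json', encoding='utf-8') as file:
    annotations = json.load(file)

# The per-video schema is not documented here, so inspect it before using it.
if isinstance(annotations, dict):
    first_key = next(iter(annotations))
    print(first_key, annotations[first_key])
else:
    print(annotations[0])
```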
### Data Instances
Please refer to https://ruc-aimc-lab.github.io/ChinaOpen/#examples | AIMClab/ChinaOpen | [
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2023-11-14T02:35:42+00:00 | {"language": ["zh"], "license": "cc-by-nc-sa-4.0", "size_categories": ["1K<n<10K"]} | 2023-11-15T15:50:34+00:00 | []
| [
"zh"
]
| TAGS
#size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-sa-4.0 #region-us
|
## Dataset Description
- Homepage: ChinaOpen homepage
- Paper: ChinaOpen: A Dataset for Open-World Multimodal Learning
- Point of Contact: Aozhu Chen
### Dataset Summary
ChinaOpen-1k is a dataset sourced from Bilibili, a popular Chinese video-sharing website. It is a manually annotated test set of videos, including manually checked user titles/tags, manually written captions, and manual labels describing the visual objects/actions/scenes shown in the content.
### Languages
Chinese and English
## Dataset Structure
All the files are put in a zip package.
### Data Instances
Please refer to URL | [
"## Dataset Description\n\n\n- Homepage: ChinaOpen homepage\n- Paper: ChinaOpen: A Dataset for Open-World Multimodal Learning\n- Point of Contact: Aozhu Chen",
"### Dataset Summary\nChinaOpen-1k is a dataset sourced from Bilibili, a popular Chinese video-sharing website. It is a manually annotated test set of videos, including manually checked user titles/tags, manually written captions, and manual labels describing the visual objects/actions/scenes shown in the content.",
"### Languages\nChinese and English",
"## Dataset Structure\nAll the files are put in a zip package.",
"### Data Instances\nPlease refer to URL"
]
| [
"TAGS\n#size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-sa-4.0 #region-us \n",
"## Dataset Description\n\n\n- Homepage: ChinaOpen homepage\n- Paper: ChinaOpen: A Dataset for Open-World Multimodal Learning\n- Point of Contact: Aozhu Chen",
"### Dataset Summary\nChinaOpen-1k is a dataset sourced from Bilibili, a popular Chinese video-sharing website. It is a manually annotated test set of videos, including manually checked user titles/tags, manually written captions, and manual labels describing the visual objects/actions/scenes shown in the content.",
"### Languages\nChinese and English",
"## Dataset Structure\nAll the files are put in a zip package.",
"### Data Instances\nPlease refer to URL"
]
| [
36,
37,
80,
7,
16,
10
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-sa-4.0 #region-us \n## Dataset Description\n\n\n- Homepage: ChinaOpen homepage\n- Paper: ChinaOpen: A Dataset for Open-World Multimodal Learning\n- Point of Contact: Aozhu Chen### Dataset Summary\nChinaOpen-1k is a dataset sourced from Bilibili, a popular Chinese video-sharing website. It is a manually annotated test set of videos, including manually checked user titles/tags, manually written captions, and manual labels describing the visual objects/actions/scenes shown in the content.### Languages\nChinese and English## Dataset Structure\nAll the files are put in a zip package.### Data Instances\nPlease refer to URL"
]
|
801541026a80b4e236ce660ad790d4a1df8ba8ee | # Dataset Card for "synpre_extract_q10_a5_1M_q_middle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/synpre_extract_q10_a5_1M_q_middle | [
"region:us"
]
| 2023-11-14T02:45:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 9241458, "num_examples": 9777}, {"name": "train", "num_bytes": 925944617, "num_examples": 976352}], "download_size": 545393918, "dataset_size": 935186075}} | 2023-11-14T02:46:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "synpre_extract_q10_a5_1M_q_middle"
More Information needed | [
"# Dataset Card for \"synpre_extract_q10_a5_1M_q_middle\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"synpre_extract_q10_a5_1M_q_middle\"\n\nMore Information needed"
]
| [
6,
28
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"synpre_extract_q10_a5_1M_q_middle\"\n\nMore Information needed"
]
|
fb96b924b75e36a6df2f1dbd4d71777db3144dfb |
This collection of data includes over seventeen million global companies. The dataset has information such as a company's name, website domain, size, year founded, industry, city/state, country and the handle of their LinkedIn URL.
Schema, data stats, general documentation, and other datasets can be found at: https://docs.bigpicture.io/docs/free-datasets/companies/ | bigpictureio/companies-2023-q4-sm | [
"task_categories:feature-extraction",
"size_categories:10M<n<100M",
"language:en",
"license:odc-by",
"finance",
"region:us"
]
| 2023-11-14T02:45:57+00:00 | {"language": ["en"], "license": "odc-by", "size_categories": ["10M<n<100M"], "task_categories": ["feature-extraction"], "pretty_name": "BigPicture 2023 Q4 Free 17M+ Company Dataset", "tags": ["finance"]} | 2023-11-14T02:53:53+00:00 | []
| [
"en"
]
| TAGS
#task_categories-feature-extraction #size_categories-10M<n<100M #language-English #license-odc-by #finance #region-us
|
This collection of data includes over seventeen million global companies. The dataset has information such as a company's name, website domain, size, year founded, industry, city/state, country and the handle of their LinkedIn URL.
Schema, data stats, general documentation, and other datasets can be found at: URL | []
| [
"TAGS\n#task_categories-feature-extraction #size_categories-10M<n<100M #language-English #license-odc-by #finance #region-us \n"
]
| [
45
]
| [
"passage: TAGS\n#task_categories-feature-extraction #size_categories-10M<n<100M #language-English #license-odc-by #finance #region-us \n"
]
|