| Column | Type | Min length/items | Max length/items |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
| tokens_length | sequence | 1 | 353 |
| input_texts | sequence | 1 | 40 |
c991895b7462453f1ec2fe80d9f51810bd2474c7
# Dataset Card for Evaluation run of Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Weyaxi__MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-08T05:11:37.271243](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp/blob/main/results_2024-01-08T05-11-37.271243.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6464533975388377, "acc_stderr": 0.032163810731246786, "acc_norm": 0.6464814911400231, "acc_norm_stderr": 0.03282564461917708, "mc1": 0.39412484700122397, "mc1_stderr": 0.017106588140700322, "mc2": 0.5513669244614883, "mc2_stderr": 0.015335304188531462 }, "harness|arc:challenge|25": { "acc": 0.6254266211604096, "acc_stderr": 0.014144193471893449, "acc_norm": 0.6459044368600683, "acc_norm_stderr": 0.013975454122756562 }, "harness|hellaswag|10": { "acc": 0.6642103166699861, "acc_stderr": 0.004713006072807707, "acc_norm": 0.8537143995220076, "acc_norm_stderr": 0.0035267007418794435 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.046482319871173156, "acc_norm": 0.31, "acc_norm_stderr": 0.046482319871173156 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6370370370370371, "acc_stderr": 0.04153948404742398, "acc_norm": 0.6370370370370371, "acc_norm_stderr": 0.04153948404742398 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6907894736842105, "acc_stderr": 0.037610708698674805, "acc_norm": 0.6907894736842105, "acc_norm_stderr": 0.037610708698674805 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.55, "acc_stderr": 0.049999999999999996, "acc_norm": 0.55, "acc_norm_stderr": 0.049999999999999996 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7132075471698113, "acc_stderr": 0.02783491252754406, "acc_norm": 0.7132075471698113, "acc_norm_stderr": 0.02783491252754406 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.75, "acc_stderr": 0.03621034121889507, "acc_norm": 0.75, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.049888765156985884, "acc_norm": 0.56, "acc_norm_stderr": 
0.049888765156985884 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.04793724854411018, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411018 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416906, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416906 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.048971049527263666, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.048971049527263666 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5957446808510638, "acc_stderr": 0.03208115750788684, "acc_norm": 0.5957446808510638, "acc_norm_stderr": 0.03208115750788684 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5448275862068965, "acc_stderr": 0.04149886942192117, "acc_norm": 0.5448275862068965, "acc_norm_stderr": 0.04149886942192117 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4126984126984127, "acc_stderr": 0.025355741263055256, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.025355741263055256 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.42063492063492064, "acc_stderr": 0.04415438226743744, "acc_norm": 0.42063492063492064, "acc_norm_stderr": 0.04415438226743744 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7806451612903226, "acc_stderr": 0.023540799358723295, "acc_norm": 0.7806451612903226, "acc_norm_stderr": 0.023540799358723295 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5172413793103449, "acc_stderr": 0.035158955511656986, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.035158955511656986 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621504, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7515151515151515, "acc_stderr": 0.033744026441394036, "acc_norm": 0.7515151515151515, "acc_norm_stderr": 0.033744026441394036 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945627, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945627 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8808290155440415, "acc_stderr": 0.023381935348121437, "acc_norm": 0.8808290155440415, "acc_norm_stderr": 0.023381935348121437 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3592592592592593, "acc_stderr": 0.029252905927251972, "acc_norm": 0.3592592592592593, "acc_norm_stderr": 0.029252905927251972 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.680672268907563, "acc_stderr": 0.030283995525884396, "acc_norm": 0.680672268907563, "acc_norm_stderr": 0.030283995525884396 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.31788079470198677, "acc_stderr": 0.038020397601079024, "acc_norm": 0.31788079470198677, "acc_norm_stderr": 0.038020397601079024 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8477064220183487, "acc_stderr": 0.015405084393157074, "acc_norm": 0.8477064220183487, "acc_norm_stderr": 0.015405084393157074 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5185185185185185, "acc_stderr": 
0.034076320938540516, "acc_norm": 0.5185185185185185, "acc_norm_stderr": 0.034076320938540516 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7843137254901961, "acc_stderr": 0.028867431449849313, "acc_norm": 0.7843137254901961, "acc_norm_stderr": 0.028867431449849313 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7890295358649789, "acc_stderr": 0.026558372502661916, "acc_norm": 0.7890295358649789, "acc_norm_stderr": 0.026558372502661916 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6860986547085202, "acc_stderr": 0.031146796482972465, "acc_norm": 0.6860986547085202, "acc_norm_stderr": 0.031146796482972465 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7786259541984732, "acc_stderr": 0.0364129708131373, "acc_norm": 0.7786259541984732, "acc_norm_stderr": 0.0364129708131373 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8099173553719008, "acc_stderr": 0.03581796951709282, "acc_norm": 0.8099173553719008, "acc_norm_stderr": 0.03581796951709282 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.49107142857142855, "acc_stderr": 0.04745033255489123, "acc_norm": 0.49107142857142855, "acc_norm_stderr": 0.04745033255489123 }, "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8803418803418803, "acc_stderr": 0.021262719400406964, "acc_norm": 0.8803418803418803, "acc_norm_stderr": 0.021262719400406964 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.74, "acc_stderr": 
0.04408440022768079, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8212005108556832, "acc_stderr": 0.013702643715368982, "acc_norm": 0.8212005108556832, "acc_norm_stderr": 0.013702643715368982 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7196531791907514, "acc_stderr": 0.024182427496577615, "acc_norm": 0.7196531791907514, "acc_norm_stderr": 0.024182427496577615 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.38324022346368714, "acc_stderr": 0.016260159604429128, "acc_norm": 0.38324022346368714, "acc_norm_stderr": 0.016260159604429128 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7254901960784313, "acc_stderr": 0.02555316999182652, "acc_norm": 0.7254901960784313, "acc_norm_stderr": 0.02555316999182652 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7009646302250804, "acc_stderr": 0.02600330111788514, "acc_norm": 0.7009646302250804, "acc_norm_stderr": 0.02600330111788514 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7407407407407407, "acc_stderr": 0.024383665531035454, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.024383665531035454 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48226950354609927, "acc_stderr": 0.02980873964223777, "acc_norm": 0.48226950354609927, "acc_norm_stderr": 0.02980873964223777 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4511082138200782, "acc_stderr": 0.012709037347346233, "acc_norm": 0.4511082138200782, "acc_norm_stderr": 0.012709037347346233 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6617647058823529, "acc_stderr": 0.028739328513983572, "acc_norm": 0.6617647058823529, "acc_norm_stderr": 0.028739328513983572 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6715686274509803, "acc_stderr": 0.018999707383162673, "acc_norm": 0.6715686274509803, "acc_norm_stderr": 0.018999707383162673 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 
0.04525393596302506, "acc_norm": 0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7428571428571429, "acc_stderr": 0.02797982353874455, "acc_norm": 0.7428571428571429, "acc_norm_stderr": 0.02797982353874455 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8557213930348259, "acc_stderr": 0.02484575321230604, "acc_norm": 0.8557213930348259, "acc_norm_stderr": 0.02484575321230604 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.034873508801977704, "acc_norm": 0.86, "acc_norm_stderr": 0.034873508801977704 }, "harness|hendrycksTest-virology|5": { "acc": 0.5421686746987951, "acc_stderr": 0.0387862677100236, "acc_norm": 0.5421686746987951, "acc_norm_stderr": 0.0387862677100236 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.847953216374269, "acc_stderr": 0.027539122889061463, "acc_norm": 0.847953216374269, "acc_norm_stderr": 0.027539122889061463 }, "harness|truthfulqa:mc|0": { "mc1": 0.39412484700122397, "mc1_stderr": 0.017106588140700322, "mc2": 0.5513669244614883, "mc2_stderr": 0.015335304188531462 }, "harness|winogrande|5": { "acc": 0.7908445146014207, "acc_stderr": 0.011430450045881573 }, "harness|gsm8k|5": { "acc": 0.7103866565579985, "acc_stderr": 0.012493927348659629 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
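The per-task scores in the "Latest results" block pair each harness task with `acc`/`acc_norm` metrics. A minimal sketch of post-processing such a results dict (the `summarize` helper is illustrative, not part of this dataset's tooling, and `results` below is a hand-copied excerpt of the JSON above, not the full file):

```python
import json

def summarize(results: dict) -> dict:
    """Map each harness task name to its preferred metric,
    favoring acc_norm over acc when both are reported."""
    summary = {}
    for task, scores in results.items():
        # Task keys look like "harness|arc:challenge|25" (harness|task|n_shots).
        name = task.split("|")[1] if "|" in task else task
        summary[name] = scores.get("acc_norm", scores.get("acc"))
    return summary

# Excerpt of the "Latest results" JSON above.
results = {
    "harness|arc:challenge|25": {"acc": 0.6254266211604096, "acc_norm": 0.6459044368600683},
    "harness|hellaswag|10": {"acc": 0.6642103166699861, "acc_norm": 0.8537143995220076},
    "harness|winogrande|5": {"acc": 0.7908445146014207},
    "harness|gsm8k|5": {"acc": 0.7103866565579985},
}

print(json.dumps(summarize(results), indent=2))
```

In practice you would load the full `results_2024-01-08T05-11-37.271243.json` file (or the "results" configuration) instead of copying values by hand.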
open-llm-leaderboard/details_Weyaxi__MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp
[ "region:us" ]
2024-01-08T05:13:55+00:00
{"pretty_name": "Evaluation run of Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-08T05:11:37.271243](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp/blob/main/results_2024-01-08T05-11-37.271243.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6464533975388377,\n \"acc_stderr\": 0.032163810731246786,\n \"acc_norm\": 0.6464814911400231,\n \"acc_norm_stderr\": 0.03282564461917708,\n \"mc1\": 0.39412484700122397,\n \"mc1_stderr\": 0.017106588140700322,\n \"mc2\": 0.5513669244614883,\n \"mc2_stderr\": 0.015335304188531462\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6254266211604096,\n \"acc_stderr\": 0.014144193471893449,\n \"acc_norm\": 0.6459044368600683,\n \"acc_norm_stderr\": 0.013975454122756562\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6642103166699861,\n \"acc_stderr\": 0.004713006072807707,\n \"acc_norm\": 0.8537143995220076,\n \"acc_norm_stderr\": 0.0035267007418794435\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.046482319871173156,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.046482319871173156\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.02783491252754406,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.02783491252754406\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 
0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411018,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411018\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416906,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416906\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5957446808510638,\n \"acc_stderr\": 0.03208115750788684,\n \"acc_norm\": 0.5957446808510638,\n \"acc_norm_stderr\": 0.03208115750788684\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055256,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055256\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n 
\"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723295,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723295\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945627,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945627\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.023381935348121437,\n \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.023381935348121437\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251972,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251972\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.030283995525884396,\n \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.030283995525884396\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.034076320938540516,\n \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.034076320938540516\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849313,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849313\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7890295358649789,\n \"acc_stderr\": 0.026558372502661916,\n \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.026558372502661916\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.0364129708131373,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.0364129708131373\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 
0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8212005108556832,\n \"acc_stderr\": 0.013702643715368982,\n \"acc_norm\": 0.8212005108556832,\n \"acc_norm_stderr\": 0.013702643715368982\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.024182427496577615,\n \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.024182427496577615\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38324022346368714,\n \"acc_stderr\": 0.016260159604429128,\n \"acc_norm\": 0.38324022346368714,\n \"acc_norm_stderr\": 0.016260159604429128\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.02555316999182652,\n \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.02555316999182652\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 
0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.024383665531035454,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.024383665531035454\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4511082138200782,\n \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.4511082138200782,\n \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6617647058823529,\n \"acc_stderr\": 0.028739328513983572,\n \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.028739328513983572\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8557213930348259,\n \"acc_stderr\": 0.02484575321230604,\n \"acc_norm\": 0.8557213930348259,\n \"acc_norm_stderr\": 0.02484575321230604\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 
0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.847953216374269,\n \"acc_stderr\": 0.027539122889061463,\n \"acc_norm\": 0.847953216374269,\n \"acc_norm_stderr\": 0.027539122889061463\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.39412484700122397,\n \"mc1_stderr\": 0.017106588140700322,\n \"mc2\": 0.5513669244614883,\n \"mc2_stderr\": 0.015335304188531462\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7908445146014207,\n \"acc_stderr\": 0.011430450045881573\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7103866565579985,\n \"acc_stderr\": 0.012493927348659629\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|arc:challenge|25_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|gsm8k|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hellaswag|10_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T05-11-37.271243.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T05-11-37.271243.parquet", 
"**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T05-11-37.271243.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T05-11-37.271243.parquet", 
"**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T05-11-37.271243.parquet", 
"**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T05-11-37.271243.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", 
"data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T05-11-37.271243.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": 
["**/details_harness|hendrycksTest-virology|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["**/details_harness|winogrande|5_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-08T05-11-37.271243.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_08T05_11_37.271243", "path": ["results_2024-01-08T05-11-37.271243.parquet"]}, {"split": "latest", "path": ["results_2024-01-08T05-11-37.271243.parquet"]}]}]}
2024-01-08T05:14:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp Dataset automatically created during the evaluation run of model Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-08T05:11:37.271243 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases, and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
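The loading step described above can be sketched as a minimal Python helper. The repo id and the example config name ("harness_winogrande_5") are taken from the "configs" list in the metadata above; any other listed config name (e.g. "harness_gsm8k_5") works the same way.

```python
def load_eval_split(config_name: str, split: str = "latest"):
    """Load one evaluation config from this details repo.

    The "latest" split always points at the newest run; a timestamped
    split name such as "2024_01_08T05_11_37.271243" selects a specific run.
    """
    # Imported lazily so the helper can be defined without `datasets` installed.
    from datasets import load_dataset

    # Repo id of this details dataset (model name with "/" replaced by "__").
    repo_id = "open-llm-leaderboard/details_Weyaxi__MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp"
    return load_dataset(repo_id, config_name, split=split)


if __name__ == "__main__":
    # Downloads the parquet files on first use, then serves from the local HF cache.
    data = load_eval_split("harness_winogrande_5")
    print(data)
```

Passing a timestamped split name instead of "latest" pins the load to one evaluation run, which is useful when comparing successive runs of the same model.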
[ "# Dataset Card for Evaluation run of Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-08T05:11:37.271243(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-08T05:11:37.271243(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 211, 68, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-08T05:11:37.271243(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:" ]
fbc86cb47a39a9465b9235622014b33f8ca9bb7f
# Dataset Card for "competition_math_hf_dataset" * **Homepage** - https://huggingface.co/datasets/hendrycks/competition_math This is just the competition math dataset, put in HF dataset format for ease of use with any finetuning tasks
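The card above doesn't show the record layout, but the dataset metadata lists four string fields: `problem`, `level`, `type`, and `solution`. As a minimal, self-contained sketch (sample values are invented, not taken from the dataset) of shaping one such record into a finetuning prompt:

```python
# Illustrative only: a record shaped like this dataset's schema
# (fields per the dataset metadata: problem, level, type, solution).
sample = {
    "problem": "What is $1+1$?",
    "level": "Level 1",
    "type": "Algebra",
    "solution": "$1+1=\\boxed{2}$.",
}

def to_prompt(record):
    """Format one record as a simple instruction/response pair for finetuning."""
    return (
        f"Problem ({record['type']}, {record['level']}):\n"
        f"{record['problem']}\n\nSolution:\n{record['solution']}"
    )

print(to_prompt(sample))
```

The exact prompt template is a choice left to the finetuning setup; the point is only that all four fields are plain strings and can be concatenated directly.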
SuperSecureHuman/competition_math_hf_dataset
[ "region:us" ]
2024-01-08T05:16:14+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5984772, "num_examples": 7500}, {"name": "test", "num_bytes": 3732833, "num_examples": 5000}], "download_size": 4866025, "dataset_size": 9717605}}
2024-01-08T05:19:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "competition_math_hf_dataset" * Homepage - URL This is just the competition math dataset, put in HF dataset format for ease of use with any finetuning tasks
[ "# Dataset Card for \"competition_math_hf_dataset\"\n\n* Homepage - URL\n\nThis is just the competetion math dataset, put in HF dataset format for ease of use with any finetuning tasks" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"competition_math_hf_dataset\"\n\n* Homepage - URL\n\nThis is just the competetion math dataset, put in HF dataset format for ease of use with any finetuning tasks" ]
[ 6, 51 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"competition_math_hf_dataset\"\n\n* Homepage - URL\n\nThis is just the competetion math dataset, put in HF dataset format for ease of use with any finetuning tasks" ]
e1c6481684f34befa26d608aec159ddaaacc34ca
# Dataset Card for "Vivos_CommonVoice_FOSD_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tuanmanh28/Vivos_CommonVoice_FOSD_dataset
[ "region:us" ]
2024-01-08T05:38:09+00:00
{"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2479634968.0, "num_examples": 37513}, {"name": "test", "num_bytes": 186204109.69129592, "num_examples": 4590}], "download_size": 2699291639, "dataset_size": 2665839077.691296}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-08T06:53:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Vivos_CommonVoice_FOSD_dataset" More Information needed
[ "# Dataset Card for \"Vivos_CommonVoice_FOSD_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Vivos_CommonVoice_FOSD_dataset\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Vivos_CommonVoice_FOSD_dataset\"\n\nMore Information needed" ]
3abc5a0e6cc550d73ae6762acbb52a9e69ef07db
## wiki-burmese-sentences Simbolo wiki data is derived from the extensive collection of Wikipedia articles and serves diverse purposes, with a primary focus on text generation. The main contributor to this dataset is Sa Phyo Thu Htet, with valuable contributions also made by other dedicated individuals, including Kaung Khant Ko Ko and Phuu Pwint Thinzar Kyaing. This collaborative effort has resulted in a rich and comprehensive dataset that can be utilized across various applications in the realm of natural language processing and text-based tasks. It contains 100,000 Burmese sentences. ### How to use ```python
from datasets import load_dataset

dataset = load_dataset("Simbolo-Servicio/wiki-burmese-sentences")
``` ### Acknowledgment We would like to thank Simbolo Servicio, which is a branch of Simbolo under the Intello Tech Company, for the financial support for this work. Thanks also to the creators of Wikipedia and the contributors of the articles on the Wikipedia platform. ### Cite ```bibtex @misc{wiki-burmese-sentences, author = {{Sa Phyo Thu Htet}}, title = {Wiki Burmese Sentences}, url = {https://huggingface.co/datasets/Simbolo-Servicio/wiki-burmese-sentences}, urldate = {2024-1-09}, date = {2024-1-09} } ```
simbolo-ai/wiki-burmese-sentences
[ "license:mit", "region:us" ]
2024-01-08T05:40:09+00:00
{"license": "mit"}
2024-02-09T09:25:52+00:00
[]
[]
TAGS #license-mit #region-us
## wiki-burmese-sentences Simbolo wiki data is derived from the extensive collection of Wikipedia articles and serves diverse purposes, with a primary focus on text generation. The main contributor to this dataset is Sa Phyo Thu Htet, with valuable contributions also made by other dedicated individuals, including Kaung Khant Ko Ko and Phuu Pwint Thinzar Kyaing. This collaborative effort has resulted in a rich and comprehensive dataset that can be utilized across various applications in the realm of natural language processing and text-based tasks. It contains 100,000 Burmese sentences. ### How to use from datasets import load_dataset dataset = load_dataset("Simbolo-Servicio/wiki-burmese-sentences") ### Acknowledgment We would like to thank Simbolo Servicio, which is a branch of Simbolo under the Intello Tech Company, for the financial support for this work. Thanks also to the creators of Wikipedia and the contributors of the articles on the Wikipedia platform. ### Cite
[ "## wiki-burmese-sentences\n\nSimbolo wiki data is derived from the extensive collection of Wikipedia articles and serves diverse purposes, with a primary focus on text generation. The main contributor to this dataset is Sa Phyo Thu Htet, with valuable contributions also made by other dedicated individuals, including Kaung Khant Ko Ko and Phuu Pwint Thinzar Kyaing. This collaborative effort has resulted in a rich and comprehensive dataset that can be utilized across various applications in the realm of natural language processing and text-based tasks.\nIt contains 100,000 burmese sentences.\nfrom datasets import load_dataset", "### How to use\ndataset = load_dataset(\"Simbolo-Servicio/wiki-burmese-sentences\")", "### Acknowledgment\nWe would like to thank Simbolo Servicio which is a branch of Simbolo under the Intello Tech Company for the financial support for this work. And thanks creators of Wikipedia and contributors of the articles on the Wikipedia Platform.", "### Cite" ]
[ "TAGS\n#license-mit #region-us \n", "## wiki-burmese-sentences\n\nSimbolo wiki data is derived from the extensive collection of Wikipedia articles and serves diverse purposes, with a primary focus on text generation. The main contributor to this dataset is Sa Phyo Thu Htet, with valuable contributions also made by other dedicated individuals, including Kaung Khant Ko Ko and Phuu Pwint Thinzar Kyaing. This collaborative effort has resulted in a rich and comprehensive dataset that can be utilized across various applications in the realm of natural language processing and text-based tasks.\nIt contains 100,000 burmese sentences.\nfrom datasets import load_dataset", "### How to use\ndataset = load_dataset(\"Simbolo-Servicio/wiki-burmese-sentences\")", "### Acknowledgment\nWe would like to thank Simbolo Servicio which is a branch of Simbolo under the Intello Tech Company for the financial support for this work. And thanks creators of Wikipedia and contributors of the articles on the Wikipedia Platform.", "### Cite" ]
[ 11, 143, 29, 54, 4 ]
[ "passage: TAGS\n#license-mit #region-us \n## wiki-burmese-sentences\n\nSimbolo wiki data is derived from the extensive collection of Wikipedia articles and serves diverse purposes, with a primary focus on text generation. The main contributor to this dataset is Sa Phyo Thu Htet, with valuable contributions also made by other dedicated individuals, including Kaung Khant Ko Ko and Phuu Pwint Thinzar Kyaing. This collaborative effort has resulted in a rich and comprehensive dataset that can be utilized across various applications in the realm of natural language processing and text-based tasks.\nIt contains 100,000 burmese sentences.\nfrom datasets import load_dataset### How to use\ndataset = load_dataset(\"Simbolo-Servicio/wiki-burmese-sentences\")### Acknowledgment\nWe would like to thank Simbolo Servicio which is a branch of Simbolo under the Intello Tech Company for the financial support for this work. And thanks creators of Wikipedia and contributors of the articles on the Wikipedia Platform.### Cite" ]
4c21e09eab5c5480d71567689323c6b557261c1b
# Dataset Card for github_issues ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
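One practical detail worth knowing about datasets built from the GitHub issues API: the endpoint returns pull requests alongside issues, and this dataset's features accordingly include an `is_pull_request` flag. A small illustrative sketch (the records are invented, and trimmed to a few of the many fields in the schema) of separating the two:

```python
# Sample records in the shape of this dataset (invented values; the real
# records carry many more fields such as url, user, labels, comments, ...).
records = [
    {"number": 1, "title": "Crash on load", "is_pull_request": False},
    {"number": 2, "title": "Fix crash on load", "is_pull_request": True},
    {"number": 3, "title": "Docs unclear", "is_pull_request": False},
]

# Keep only true issues, dropping pull requests.
issues_only = [r for r in records if not r["is_pull_request"]]
print([r["number"] for r in issues_only])  # → [1, 3]
```

For question-answering use cases, filtering on this flag first is usually the sensible default, since PR bodies and issue bodies read quite differently.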
ilbaks/github-issues
[ "task_categories:question-answering", "language:en", "region:us" ]
2024-01-08T05:47:35+00:00
{"language": ["en"], "task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": 
"followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "creator", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": 
"following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "open_issues", "dtype": "int64"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "due_on", "dtype": "null"}, {"name": "closed_at", "dtype": "null"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 
12273341, "num_examples": 1000}], "download_size": 3470168, "dataset_size": 12273341}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-08T06:16:59+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #language-English #region-us
# Dataset Card for github_issues ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for github_issues", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #language-English #region-us \n", "# Dataset Card for github_issues", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ 22, 10, 125, 24, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-question-answering #language-English #region-us \n# Dataset Card for github_issues## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @github-username for adding this dataset." ]
f03889674eae9facaf17285ab0650e636259d58d
# Dataset Card for "uf_unsafe_v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
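Per the dataset features, each example stores `chosen` and `rejected` message lists whose items carry `content` and `role` keys, i.e. the usual preference-pair layout. A small sketch (the sample conversation is invented) of pulling the final assistant turn out of such a list:

```python
# Invented sample in the shape of this dataset's features: preference pairs
# stored as "chosen"/"rejected" message lists with content/role keys.
pair = {
    "chosen": [
        {"role": "user", "content": "How do I secure my account?"},
        {"role": "assistant", "content": "Use a strong, unique password and 2FA."},
    ],
    "rejected": [
        {"role": "user", "content": "How do I secure my account?"},
        {"role": "assistant", "content": "Don't worry about it."},
    ],
}

def last_assistant_turn(messages):
    """Return the content of the last assistant message in a conversation."""
    return next(m["content"] for m in reversed(messages) if m["role"] == "assistant")

print(last_assistant_turn(pair["chosen"]))
```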
yimingzhang/uf_unsafe_v1
[ "region:us" ]
2024-01-08T06:05:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_prefs", "num_bytes": 1652692, "num_examples": 3532}, {"name": "test_prefs", "num_bytes": 82728, "num_examples": 172}], "download_size": 829493, "dataset_size": 1735420}}
2024-01-08T06:13:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "uf_unsafe_v1" More Information needed
[ "# Dataset Card for \"uf_unsafe_v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"uf_unsafe_v1\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"uf_unsafe_v1\"\n\nMore Information needed" ]
bc8f648ad8f10bd87d5581b532edad33f41c2b92
# Dataset Card for FudanSELab CodeGen4Libs Code Retrieval Library ## Dataset Description - **Repository:** [GitHub Repository](https://github.com/FudanSELab/codegen4libs) - **Paper:** [CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation](https://mingwei-liu.github.io/publication/2023-08-18-ase-CodeGen4Libs) ### Dataset Summary This dataset is the code retrieval library used in the ASE2023 paper titled ["CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation"](https://mingwei-liu.github.io/publication/2023-08-18-ase-CodeGen4Libs). ## Additional Information ### Citation Information ``` @inproceedings{ase2023codegen4libs, author = {Mingwei Liu and Tianyong Yang and Yiling Lou and Xueying Du and Ying Wang and Xin Peng}, title = {{CodeGen4Libs}: A Two-stage Approach for Library-oriented Code Generation}, booktitle = {38th {IEEE/ACM} International Conference on Automated Software Engineering, {ASE} 2023, Kirchberg, Luxembourg, September 11-15, 2023}, pages = {0--0}, publisher = {{IEEE}}, year = {2023}, } ```
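As a rough illustration of what retrieval over records like these could look like — a toy sketch with invented values, not the paper's two-stage method — one can score corpus entries by overlap between the libraries a query asks for and each record's `libraries` field (field names follow the dataset's features):

```python
# Toy corpus shaped like this dataset's schema (values invented; real records
# also carry method, clean_method, doc, comment, imports, and more).
corpus = [
    {"method_name": "readJson", "libraries": ["com.google.gson"]},
    {"method_name": "httpGet", "libraries": ["org.apache.http", "java.io"]},
    {"method_name": "copyFile", "libraries": ["java.io", "java.nio"]},
]

def rank_by_library_overlap(query_libs, records):
    """Rank records by how many of the requested libraries they use."""
    scored = [
        (len(set(query_libs) & set(r["libraries"])), r["method_name"])
        for r in records
    ]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(rank_by_library_overlap(["java.io", "java.nio"], corpus))  # → ['copyFile', 'httpGet']
```

Library overlap is of course only a crude signal; it merely shows how the `libraries` field supports library-oriented lookup at all.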
FudanSELab/CodeGen4Libs_RetrievalCodeLib
[ "size_categories:1M<n<10M", "license:mit", "code-generation", "region:us" ]
2024-01-08T07:07:55+00:00
{"license": "mit", "size_categories": ["1M<n<10M"], "pretty_name": "CodeGen4Libs ", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "method", "dtype": "string"}, {"name": "clean_method", "dtype": "string"}, {"name": "doc", "dtype": "string"}, {"name": "comment", "dtype": "string"}, {"name": "method_name", "dtype": "string"}, {"name": "extra", "struct": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "size", "dtype": "int64"}]}, {"name": "imports", "sequence": "string"}, {"name": "imports_info", "dtype": "string"}, {"name": "cluster_imports_info", "dtype": "string"}, {"name": "libraries", "sequence": "string"}, {"name": "libraries_info", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5373034269, "num_examples": 2916582}], "download_size": 2492962682, "dataset_size": 5373034269}, "tags": ["code-generation"]}
2024-01-08T08:53:39+00:00
[]
[]
TAGS #size_categories-1M<n<10M #license-mit #code-generation #region-us
# Dataset Card for FudanSELab CodeGen4Libs Code Retrieval Library ## Dataset Description - Repository: GitHub Repository - Paper: CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation ### Dataset Summary This dataset is the code retrieval library used in the ASE2023 paper titled "CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation". ## Additional Information
[ "# Dataset Card for FudanSELab CodeGen4Libs Code Retrieval Library", "## Dataset Description\n\n- Repository: GitHub Repository\n- Paper: CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation", "### Dataset Summary\n\nThis dataset is the code retrieval library used in the ASE2023 paper titled \"CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation\".", "## Additional Information" ]
[ "TAGS\n#size_categories-1M<n<10M #license-mit #code-generation #region-us \n", "# Dataset Card for FudanSELab CodeGen4Libs Code Retrieval Library", "## Dataset Description\n\n- Repository: GitHub Repository\n- Paper: CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation", "### Dataset Summary\n\nThis dataset is the code retrieval library used in the ASE2023 paper titled \"CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation\".", "## Additional Information" ]
[ 28, 19, 39, 50, 5 ]
[ "passage: TAGS\n#size_categories-1M<n<10M #license-mit #code-generation #region-us \n# Dataset Card for FudanSELab CodeGen4Libs Code Retrieval Library## Dataset Description\n\n- Repository: GitHub Repository\n- Paper: CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation### Dataset Summary\n\nThis dataset is the code retrieval library used in the ASE2023 paper titled \"CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation\".## Additional Information" ]
b4ddac7838666147ac4ed78cf4300e402c32fa05
# Dataset Card for Evaluation run of chargoddard/mistral-11b-slimorca <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [chargoddard/mistral-11b-slimorca](https://huggingface.co/chargoddard/mistral-11b-slimorca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_chargoddard__mistral-11b-slimorca", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-08T07:32:00.985160](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__mistral-11b-slimorca/blob/main/results_2024-01-08T07-32-00.985160.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6367892367382408, "acc_stderr": 0.032420458743968754, "acc_norm": 0.6401126581266797, "acc_norm_stderr": 0.03306588858456459, "mc1": 0.38310893512851896, "mc1_stderr": 0.017018461679389855, "mc2": 0.5466386336115909, "mc2_stderr": 0.015507674046261742 }, "harness|arc:challenge|25": { "acc": 0.6075085324232082, "acc_stderr": 0.01426963463567073, "acc_norm": 0.6424914675767918, "acc_norm_stderr": 0.014005494275916573 }, "harness|hellaswag|10": { "acc": 0.6468830910177256, "acc_stderr": 0.004769618829196511, "acc_norm": 0.8380800637323242, "acc_norm_stderr": 0.0036762448867232586 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.35, "acc_stderr": 0.04793724854411021, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411021 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6776315789473685, "acc_stderr": 0.03803510248351585, "acc_norm": 0.6776315789473685, "acc_norm_stderr": 0.03803510248351585 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118637, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118637 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7222222222222222, "acc_stderr": 0.037455547914624555, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.037455547914624555 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 
0.050211673156867795 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6069364161849711, "acc_stderr": 0.0372424959581773, "acc_norm": 0.6069364161849711, "acc_norm_stderr": 0.0372424959581773 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4215686274509804, "acc_stderr": 0.04913595201274498, "acc_norm": 0.4215686274509804, "acc_norm_stderr": 0.04913595201274498 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5702127659574469, "acc_stderr": 0.03236214467715564, "acc_norm": 0.5702127659574469, "acc_norm_stderr": 0.03236214467715564 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5310344827586206, "acc_stderr": 0.04158632762097828, "acc_norm": 0.5310344827586206, "acc_norm_stderr": 0.04158632762097828 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41005291005291006, "acc_stderr": 0.02533120243894443, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.02533120243894443 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.46825396825396826, "acc_stderr": 0.04463112720677172, "acc_norm": 0.46825396825396826, "acc_norm_stderr": 0.04463112720677172 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7709677419354839, "acc_stderr": 0.023904914311782658, "acc_norm": 0.7709677419354839, "acc_norm_stderr": 0.023904914311782658 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5073891625615764, "acc_stderr": 0.035176035403610105, "acc_norm": 0.5073891625615764, "acc_norm_stderr": 0.035176035403610105 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.67, "acc_stderr": 0.04725815626252609, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252609 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7515151515151515, "acc_stderr": 0.033744026441394036, "acc_norm": 0.7515151515151515, "acc_norm_stderr": 0.033744026441394036 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8232323232323232, "acc_stderr": 0.027178752639044915, "acc_norm": 0.8232323232323232, "acc_norm_stderr": 0.027178752639044915 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8860103626943006, "acc_stderr": 0.022935144053919443, "acc_norm": 0.8860103626943006, "acc_norm_stderr": 0.022935144053919443 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563973, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563973 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.31851851851851853, "acc_stderr": 0.02840653309060846, "acc_norm": 0.31851851851851853, "acc_norm_stderr": 0.02840653309060846 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6848739495798319, "acc_stderr": 0.030176808288974337, "acc_norm": 0.6848739495798319, "acc_norm_stderr": 0.030176808288974337 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8477064220183487, "acc_stderr": 0.015405084393157074, "acc_norm": 0.8477064220183487, "acc_norm_stderr": 0.015405084393157074 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5046296296296297, "acc_stderr": 
0.03409825519163572, "acc_norm": 0.5046296296296297, "acc_norm_stderr": 0.03409825519163572 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8333333333333334, "acc_stderr": 0.026156867523931045, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.026156867523931045 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7848101265822784, "acc_stderr": 0.026750826994676177, "acc_norm": 0.7848101265822784, "acc_norm_stderr": 0.026750826994676177 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.695067264573991, "acc_stderr": 0.030898610882477515, "acc_norm": 0.695067264573991, "acc_norm_stderr": 0.030898610882477515 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7709923664122137, "acc_stderr": 0.036853466317118506, "acc_norm": 0.7709923664122137, "acc_norm_stderr": 0.036853466317118506 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7668711656441718, "acc_stderr": 0.0332201579577674, "acc_norm": 0.7668711656441718, "acc_norm_stderr": 0.0332201579577674 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4375, "acc_stderr": 0.04708567521880525, "acc_norm": 0.4375, "acc_norm_stderr": 0.04708567521880525 }, "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.0230866350868414, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.0230866350868414 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.72, "acc_stderr": 0.045126085985421276, "acc_norm": 
0.72, "acc_norm_stderr": 0.045126085985421276 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8301404853128991, "acc_stderr": 0.013428186370608311, "acc_norm": 0.8301404853128991, "acc_norm_stderr": 0.013428186370608311 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7167630057803468, "acc_stderr": 0.024257901705323378, "acc_norm": 0.7167630057803468, "acc_norm_stderr": 0.024257901705323378 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3541899441340782, "acc_stderr": 0.015995644947299232, "acc_norm": 0.3541899441340782, "acc_norm_stderr": 0.015995644947299232 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7352941176470589, "acc_stderr": 0.02526169121972948, "acc_norm": 0.7352941176470589, "acc_norm_stderr": 0.02526169121972948 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6816720257234726, "acc_stderr": 0.026457225067811025, "acc_norm": 0.6816720257234726, "acc_norm_stderr": 0.026457225067811025 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7345679012345679, "acc_stderr": 0.024569223600460845, "acc_norm": 0.7345679012345679, "acc_norm_stderr": 0.024569223600460845 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4787234042553192, "acc_stderr": 0.029800481645628693, "acc_norm": 0.4787234042553192, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4530638852672751, "acc_stderr": 0.012713845972358981, "acc_norm": 0.4530638852672751, "acc_norm_stderr": 0.012713845972358981 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6691176470588235, "acc_stderr": 0.02858270975389845, "acc_norm": 0.6691176470588235, "acc_norm_stderr": 0.02858270975389845 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6715686274509803, "acc_stderr": 0.01899970738316267, "acc_norm": 0.6715686274509803, "acc_norm_stderr": 0.01899970738316267 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6454545454545455, "acc_stderr": 0.045820048415054174, "acc_norm": 
0.6454545454545455, "acc_norm_stderr": 0.045820048415054174 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.726530612244898, "acc_stderr": 0.028535560337128445, "acc_norm": 0.726530612244898, "acc_norm_stderr": 0.028535560337128445 }, "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.026193923544454125, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.026193923544454125 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.84, "acc_stderr": 0.03684529491774708, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774708 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8070175438596491, "acc_stderr": 0.030267457554898458, "acc_norm": 0.8070175438596491, "acc_norm_stderr": 0.030267457554898458 }, "harness|truthfulqa:mc|0": { "mc1": 0.38310893512851896, "mc1_stderr": 0.017018461679389855, "mc2": 0.5466386336115909, "mc2_stderr": 0.015507674046261742 }, "harness|winogrande|5": { "acc": 0.7797947908445146, "acc_stderr": 0.011646276755089688 }, "harness|gsm8k|5": { "acc": 0.5238817285822593, "acc_stderr": 0.013756765835465755 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
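The per-task scores in the results JSON above all share one shape (`"harness|<task>|<n_shot>"` keys mapping to dicts with an `"acc"` entry), so they can be aggregated locally. A minimal sketch, using a small subset of the accuracies actually reported above:

```python
# Sketch: macro-averaging MMLU (hendrycksTest) accuracies from a harness
# results dict shaped like the JSON above. Only a three-task subset of the
# reported scores is reproduced here for illustration.

def mmlu_macro_average(results):
    """Average 'acc' across all hendrycksTest tasks in a harness results dict."""
    accs = [v["acc"] for k, v in results.items() if "hendrycksTest" in k]
    return sum(accs) / len(accs)

sample_results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.35},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6148148148148148},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6776315789473685},
    "harness|arc:challenge|25": {"acc": 0.6075085324232082},  # excluded: not MMLU
}

print(round(mmlu_macro_average(sample_results), 4))  # 0.5475
```

Run over the full set of 57 hendrycksTest entries in the results block, this yields the kind of aggregate MMLU figure the leaderboard displays.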
open-llm-leaderboard/details_chargoddard__mistral-11b-slimorca
[ "region:us" ]
2024-01-08T07:34:15+00:00
{"pretty_name": "Evaluation run of chargoddard/mistral-11b-slimorca", "dataset_summary": "Dataset automatically created during the evaluation run of model [chargoddard/mistral-11b-slimorca](https://huggingface.co/chargoddard/mistral-11b-slimorca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__mistral-11b-slimorca\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-08T07:32:00.985160](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__mistral-11b-slimorca/blob/main/results_2024-01-08T07-32-00.985160.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6367892367382408,\n \"acc_stderr\": 0.032420458743968754,\n \"acc_norm\": 0.6401126581266797,\n \"acc_norm_stderr\": 0.03306588858456459,\n \"mc1\": 0.38310893512851896,\n \"mc1_stderr\": 0.017018461679389855,\n \"mc2\": 0.5466386336115909,\n \"mc2_stderr\": 0.015507674046261742\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6075085324232082,\n \"acc_stderr\": 0.01426963463567073,\n \"acc_norm\": 0.6424914675767918,\n \"acc_norm_stderr\": 0.014005494275916573\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6468830910177256,\n \"acc_stderr\": 0.004769618829196511,\n \"acc_norm\": 0.8380800637323242,\n \"acc_norm_stderr\": 0.0036762448867232586\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411021,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411021\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n 
\"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.0372424959581773,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.0372424959581773\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 
0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.023904914311782658,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.023904914311782658\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8232323232323232,\n \"acc_stderr\": 0.027178752639044915,\n \"acc_norm\": 0.8232323232323232,\n \"acc_norm_stderr\": 0.027178752639044915\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563973,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563973\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.31851851851851853,\n \"acc_stderr\": 0.02840653309060846,\n \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.02840653309060846\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676177,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676177\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n 
\"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8301404853128991,\n \"acc_stderr\": 0.013428186370608311,\n \"acc_norm\": 0.8301404853128991,\n \"acc_norm_stderr\": 0.013428186370608311\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.024257901705323378,\n \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.024257901705323378\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3541899441340782,\n \"acc_stderr\": 0.015995644947299232,\n \"acc_norm\": 0.3541899441340782,\n \"acc_norm_stderr\": 0.015995644947299232\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6816720257234726,\n \"acc_stderr\": 0.026457225067811025,\n \"acc_norm\": 0.6816720257234726,\n 
\"acc_norm_stderr\": 0.026457225067811025\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4530638852672751,\n \"acc_stderr\": 0.012713845972358981,\n \"acc_norm\": 0.4530638852672751,\n \"acc_norm_stderr\": 0.012713845972358981\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.02858270975389845,\n \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.02858270975389845\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.01899970738316267,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.01899970738316267\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128445,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128445\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 
0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.38310893512851896,\n \"mc1_stderr\": 0.017018461679389855,\n \"mc2\": 0.5466386336115909,\n \"mc2_stderr\": 0.015507674046261742\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7797947908445146,\n \"acc_stderr\": 0.011646276755089688\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5238817285822593,\n \"acc_stderr\": 0.013756765835465755\n }\n}\n```", "repo_url": "https://huggingface.co/chargoddard/mistral-11b-slimorca", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|arc:challenge|25_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|gsm8k|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hellaswag|10_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T07-32-00.985160.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T07-32-00.985160.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T07-32-00.985160.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T07-32-00.985160.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T07-32-00.985160.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T07-32-00.985160.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T07-32-00.985160.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T07-32-00.985160.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["**/details_harness|winogrande|5_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-08T07-32-00.985160.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_08T07_32_00.985160", "path": ["results_2024-01-08T07-32-00.985160.parquet"]}, {"split": "latest", "path": ["results_2024-01-08T07-32-00.985160.parquet"]}]}]}
2024-01-08T07:34:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of chargoddard/mistral-11b-slimorca Dataset automatically created during the evaluation run of model chargoddard/mistral-11b-slimorca on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-08T07:32:00.985160 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
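As a concrete illustration of the loading convention described above, here is a small sketch. The repo id pattern (`details_<org>__<model>`) and the config naming (`harness_<task>_<fewshot>`) are assumptions inferred from the config list in this card; the actual `load_dataset` call is shown commented out since it requires network access to the Hub:

```python
# Sketch of the naming convention used by these leaderboard "details" datasets.
# Both helpers encode assumptions inferred from this card's config list, not
# an official API.

def details_repo(org: str, model: str) -> str:
    # e.g. ("chargoddard", "mistral-11b-slimorca")
    #   -> "open-llm-leaderboard/details_chargoddard__mistral-11b-slimorca"
    return f"open-llm-leaderboard/details_{org}__{model}"

def details_config(task: str, num_fewshot: int) -> str:
    # e.g. ("winogrande", 5) -> "harness_winogrande_5"
    return f"harness_{task}_{num_fewshot}"

repo = details_repo("chargoddard", "mistral-11b-slimorca")
config = details_config("winogrande", 5)
print(repo, config)

# With network access, the details could then be loaded like:
# from datasets import load_dataset
# data = load_dataset(repo, config, split="train")
```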
[ "# Dataset Card for Evaluation run of chargoddard/mistral-11b-slimorca\n\n\n\nDataset automatically created during the evaluation run of model chargoddard/mistral-11b-slimorca on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-08T07:32:00.985160(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of chargoddard/mistral-11b-slimorca\n\n\n\nDataset automatically created during the evaluation run of model chargoddard/mistral-11b-slimorca on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-08T07:32:00.985160(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 187, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chargoddard/mistral-11b-slimorca\n\n\n\nDataset automatically created during the evaluation run of model chargoddard/mistral-11b-slimorca on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-08T07:32:00.985160(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
f20d5db4a31fe779ef07bb1af4ea92da5c786622
AISHELL-3 is a large-scale and high-fidelity multi-speaker Mandarin speech corpus published by Beijing Shell Shell Technology Co., Ltd. It can be used to train multi-speaker Text-to-Speech (TTS) systems. The corpus contains roughly 85 hours of emotion-neutral recordings spoken by 218 native Mandarin Chinese speakers, totaling 88,035 utterances. The speakers' auxiliary attributes such as gender, age group and native accent are explicitly marked and provided in the corpus. Accordingly, transcripts at both the Chinese character level and the pinyin level are provided along with the recordings. The word & tone transcription accuracy rate is above 98%, achieved through professional speech annotation and strict quality inspection of tone and prosody. You can cite the data using the following BibTeX entry: @inproceedings{AISHELL-3_2020, title={AISHELL-3: A Multi-speaker Mandarin TTS Corpus and the Baselines}, author={Yao Shi, Hui Bu, Xin Xu, Shaoji Zhang, Ming Li}, year={2020}, url={https://arxiv.org/abs/2010.11567} } The baseline system code and generated samples are available here. External URL: http://www.aishelltech.com/aishell_3 (full description from the company website).
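As a quick sanity check of the statistics quoted above (roughly 85 hours of audio over 88,035 utterances), the average utterance length can be computed directly. The 85-hour figure is approximate, so the result is only indicative:

```python
# Back-of-the-envelope average utterance duration for AISHELL-3,
# using the (approximate) figures quoted in the description above.
hours = 85            # "roughly 85 hours" of recordings
utterances = 88_035   # total utterance count
avg_seconds = hours * 3600 / utterances
print(f"average utterance length ≈ {avg_seconds:.2f} s")  # ≈ 3.48 s
```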
AISHELL/AISHELL-3
[ "task_categories:text-to-speech", "size_categories:10K<n<100K", "language:zh", "license:apache-2.0", "arxiv:2010.11567", "region:us" ]
2024-01-08T07:41:49+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-speech"]}
2024-01-08T09:56:11+00:00
[ "2010.11567" ]
[ "zh" ]
TAGS #task_categories-text-to-speech #size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #arxiv-2010.11567 #region-us
AISHELL-3 is a large-scale and high-fidelity multi-speaker Mandarin speech corpus published by Beijing Shell Shell Technology Co., Ltd. It can be used to train multi-speaker Text-to-Speech (TTS) systems. The corpus contains roughly 85 hours of emotion-neutral recordings spoken by 218 native Mandarin Chinese speakers, totaling 88,035 utterances. The speakers' auxiliary attributes such as gender, age group and native accent are explicitly marked and provided in the corpus. Accordingly, transcripts at both the Chinese character level and the pinyin level are provided along with the recordings. The word & tone transcription accuracy rate is above 98%, achieved through professional speech annotation and strict quality inspection of tone and prosody. You can cite the data using the following BibTeX entry: @inproceedings{AISHELL-3_2020, title={AISHELL-3: A Multi-speaker Mandarin TTS Corpus and the Baselines}, author={Yao Shi, Hui Bu, Xin Xu, Shaoji Zhang, Ming Li}, year={2020}, url={URL} } The baseline system code and generated samples are available here. External URL: URL (full description from the company website).
[]
[ "TAGS\n#task_categories-text-to-speech #size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #arxiv-2010.11567 #region-us \n" ]
[ 52 ]
[ "passage: TAGS\n#task_categories-text-to-speech #size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #arxiv-2010.11567 #region-us \n" ]
ce5aeeff82e10f14d4160e591866db6ee5e8e410
# Dataset Card for WEBNLG-QA ## Dataset Description - **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/) - **Point of Contact:** Gwénolé Lecorvé ### Dataset Summary WEBNLG-QA is a conversational question answering dataset grounded on WEBNLG. It consists of a set of question-answering dialogues (follow-up question-answer pairs) based on short paragraphs of text. Each paragraph is associated with a knowledge graph (from WEBNLG). The questions are associated with SPARQL queries. ### Supported tasks * Knowledge-based question-answering * SPARQL-to-Text conversion #### Knowledge-based question-answering Below is an example dialogue: - Q1: What is used as an instrument in Sludge Metal or in Post-metal? - A1: Singing, Synthesizer - Q2: And what about Sludge Metal in particular? - A2: Singing - Q3: Does the Year of No Light album Nord belong to this genre? - A3: Yes. #### SPARQL-to-Text Question Generation SPARQL-to-Text question generation refers to the task of converting a SPARQL query into a natural language question, e.g.: ```SQL SELECT (COUNT(?country) as ?answer) WHERE { ?country property:member_of resource:Europe . ?country property:population ?n . FILTER ( ?n > 10000000 ) } ``` could be converted into: ```txt How many European countries have more than 10 million inhabitants? 
``` ## Dataset Structure ### Types of questions Comparison of question types compared to related datasets: | | | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) | |--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:| | **Number of triplets in query** | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | ✓ | ✓ | ✓ | ✓ | | | More | | | ✓ | ✓ | ✓ | | **Logical connector between triplets** | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ | | | Disjunction | | | | ✓ | ✓ | | | Exclusion | | | | ✓ | ✓ | | **Topology of the query graph** | Direct | ✓ | ✓ | ✓ | ✓ | ✓ | | | Sibling | | ✓ | ✓ | ✓ | ✓ | | | Chain | | ✓ | ✓ | ✓ | ✓ | | | Mixed | | | ✓ | | ✓ | | | Other | | ✓ | ✓ | ✓ | ✓ | | **Variable typing in the query** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | Target variable | | ✓ | ✓ | ✓ | ✓ | | | Internal variable | | ✓ | ✓ | ✓ | ✓ | | **Comparisons clauses** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | String | | | ✓ | | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Date | | | ✓ | | ✓ | | **Superlative clauses** | No | ✓ | ✓ | ✓ | ✓ | ✓ | | | Yes | | | | ✓ | | | **Answer type** | Entity (open) | ✓ | ✓ | ✓ | ✓ | ✓ | | | Entity (closed) | | | | ✓ | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Boolean | | ✓ | ✓ | ✓ | ✓ | | **Answer cardinality** | 0 (unanswerable) | | | ✓ | | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | More | | ✓ | ✓ | ✓ | ✓ | | **Number of target variables** | 0 (⇒ ASK verb) | | ✓ | ✓ | ✓ | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | | ✓ | | ✓ | | **Dialogue context** | Self-sufficient | ✓ | ✓ | ✓ | ✓ | ✓ | | | Coreference | | | | ✓ | ✓ | | | Ellipsis | | | | ✓ | ✓ | | **Meaning** | Meaningful | ✓ | ✓ | ✓ | ✓ | ✓ | | | Non-sense | | | | | ✓ | 
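To make the semantics of the SPARQL-to-Text example from the task description concrete, the query can be mirrored in plain Python over a toy set of facts. The facts below are illustrative only and are not taken from the corpus:

```python
# Plain-Python mirror of the example query:
#   SELECT (COUNT(?country) as ?answer)
#   WHERE { ?country member_of Europe ; population ?n . FILTER(?n > 10000000) }
# Toy (country, region, population) facts, invented for illustration.
facts = [
    ("France",  "Europe", 68_000_000),
    ("Iceland", "Europe", 380_000),
    ("Germany", "Europe", 84_000_000),
    ("Japan",   "Asia",   125_000_000),
]

# Count the countries that are in Europe AND have > 10M inhabitants,
# exactly what the generated natural-language question asks.
answer = sum(
    1
    for country, region, population in facts
    if region == "Europe" and population > 10_000_000
)
print(answer)  # France and Germany match -> 2
```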
### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples contain only dialogues in the form of follow-up SPARQL queries. | | Train | Validation | Test | Challenge | | --------------------- | ---------- | ---------- | ---------- | ------------ | | Questions | 27727 | 3485 | 4179 | 332 | | Dialogues | 1001 | 1264 | 1417 | 100 | | NL question per query | 0 | 0 | 0 | 2 | | Characters per query | 129 (± 43) | 131 (± 45) | 122 (± 45) | 113 (± 38) | | Tokens per question | - | - | - | 8.4 (± 4.5) | ## Additional information ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: - Non conversational datasets - [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) (from https://github.com/askplatypus/wikidata-simplequestions) - [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) (from https://github.com/barshana-banerjee/ParaQA) - [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) (from http://lc-quad.sda.tech/) - Conversational datasets - [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) (from https://amritasaha1812.github.io/CSQA/) - [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) (derived from https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/release_v3.0) ### Licensing information * Content from original dataset: CC-BY-SA 4.0 * New content: CC BY-SA 4.0 ### Citation information #### This dataset ```bibtex @inproceedings{lecorve2022sparql2text, title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications}, author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.}, journal={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language 
Processing (AACL-IJCNLP)}, year={2022} } ``` #### The underlying corpus WEBNLG 3.0 ```bibtex @inproceedings{castro-ferreira-etal-2020-2020, title = "The 2020 Bilingual, Bi-Directional {W}eb{NLG}+ Shared Task: Overview and Evaluation Results ({W}eb{NLG}+ 2020)", author = "Castro Ferreira, Thiago and Gardent, Claire and Ilinykh, Nikolai and van der Lee, Chris and Mille, Simon and Moussallem, Diego and Shimorina, Anastasia", booktitle = "Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)", year = "2020", pages = "55--76" } ```
OrangeInnov/webnlg-qa
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:cc-by-sa-4.0", "qa", "knowledge-graph", "sparql", "region:us" ]
2024-01-08T08:35:26+00:00
{"language": ["en"], "license": "cc-by-sa-4.0", "task_categories": ["conversational", "question-answering", "text-generation"], "dataset_info": {"features": [{"name": "category", "dtype": "string"}, {"name": "size", "dtype": "int32"}, {"name": "id", "dtype": "string"}, {"name": "eid", "dtype": "string"}, {"name": "original_triple_sets", "list": [{"name": "subject", "dtype": "string"}, {"name": "property", "dtype": "string"}, {"name": "object", "dtype": "string"}]}, {"name": "modified_triple_sets", "list": [{"name": "subject", "dtype": "string"}, {"name": "property", "dtype": "string"}, {"name": "object", "dtype": "string"}]}, {"name": "shape", "dtype": "string"}, {"name": "shape_type", "dtype": "string"}, {"name": "lex", "sequence": [{"name": "comment", "dtype": "string"}, {"name": "lid", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}]}, {"name": "test_category", "dtype": "string"}, {"name": "dbpedia_links", "sequence": "string"}, {"name": "links", "sequence": "string"}, {"name": "graph", "list": {"list": "string"}}, {"name": "main_entity", "dtype": "string"}, {"name": "mappings", "list": [{"name": "modified", "dtype": "string"}, {"name": "readable", "dtype": "string"}, {"name": "graph", "dtype": "string"}]}, {"name": "dialogue", "list": [{"name": "question", "list": [{"name": "source", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, {"name": "graph_query", "dtype": "string"}, {"name": "readable_query", "dtype": "string"}, {"name": "graph_answer", "list": "string"}, {"name": "readable_answer", "list": "string"}, {"name": "type", "list": "string"}]}], "splits": [{"name": "train", "num_bytes": 33200723, "num_examples": 10016}, {"name": "validation", "num_bytes": 4196972, "num_examples": 1264}, {"name": "test", "num_bytes": 4990595, "num_examples": 1417}, {"name": "challenge", "num_bytes": 420551, "num_examples": 100}], "download_size": 9637685, "dataset_size": 42808841}, "tags": ["qa", "knowledge-graph", 
"sparql"]}
2024-01-11T13:19:10+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #task_categories-text-generation #language-English #license-cc-by-sa-4.0 #qa #knowledge-graph #sparql #region-us
Dataset Card for WEBNLG-QA ========================== Dataset Description ------------------- * Paper: SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022) * Point of Contact: Gwénolé Lecorvé ### Dataset Summary WEBNLG-QA is a conversational question answering dataset grounded on WEBNLG. It consists of a set of question-answering dialogues (follow-up question-answer pairs) based on short paragraphs of text. Each paragraph is associated with a knowledge graph (from WEBNLG). The questions are associated with SPARQL queries. ### Supported tasks * Knowledge-based question-answering * SPARQL-to-Text conversion #### Knowledge-based question-answering Below is an example dialogue: * Q1: What is used as an instrument in Sludge Metal or in Post-metal? * A1: Singing, Synthesizer * Q2: And what about Sludge Metal in particular? * A2: Singing * Q3: Does the Year of No Light album Nord belong to this genre? * A3: Yes. #### SPARQL-to-Text Question Generation SPARQL-to-Text question generation refers to the task of converting a SPARQL query into a natural language question, e.g.: could be converted into: Dataset Structure ----------------- ### Types of questions Comparison of question types with related datasets: ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples contain only dialogues in the form of follow-up SPARQL queries. Additional information ---------------------- ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: * Non conversational datasets + SimpleQuestions (from URL + ParaQA (from URL + LC-QuAD 2.0 (from URL * Conversational datasets + CSQA (from URL + WebNLQ-QA (derived from URL ### Licensing information * Content from original dataset: CC-BY-SA 4.0 * New content: CC BY-SA 4.0 Citation information #### This dataset #### The underlying corpus WEBNLG 3.0
[ "### Dataset Summary\n\n\nWEBNLG-QA is a conversational question answering dataset grounded on WEBNLG. It consists in a set of question-answering dialogues (follow-up question-answer pairs) based on short paragraphs of text. Each paragraph is associated a knowledge graph (from WEBNLG). The questions are associated with SPARQL queries.", "### Supported tasks\n\n\n* Knowledge-based question-answering\n* SPARQL-to-Text conversion", "#### Knowledge based question-answering\n\n\nBelow is an example of dialogue:\n\n\n* Q1: What is used as an instrument is Sludge Metal or in Post-metal?\n* A1: Singing, Synthesizer\n* Q2: And what about Sludge Metal in particular?\n* A2: Singing\n* Q3: Does the Year of No Light album Nord belong to this genre?\n* A3: Yes.", "#### SPARQL-to-Text Question Generation\n\n\nSPARQL-to-Text question generation refers to the task of converting a SPARQL query into a natural language question, eg:\n\n\ncould be converted into:\n\n\nDataset Structure\n-----------------", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY-SA 4.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This dataset", "#### The underlying corpus WEBNLG 3.0" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #language-English #license-cc-by-sa-4.0 #qa #knowledge-graph #sparql #region-us \n", "### Dataset Summary\n\n\nWEBNLG-QA is a conversational question answering dataset grounded on WEBNLG. It consists in a set of question-answering dialogues (follow-up question-answer pairs) based on short paragraphs of text. Each paragraph is associated a knowledge graph (from WEBNLG). The questions are associated with SPARQL queries.", "### Supported tasks\n\n\n* Knowledge-based question-answering\n* SPARQL-to-Text conversion", "#### Knowledge based question-answering\n\n\nBelow is an example of dialogue:\n\n\n* Q1: What is used as an instrument is Sludge Metal or in Post-metal?\n* A1: Singing, Synthesizer\n* Q2: And what about Sludge Metal in particular?\n* A2: Singing\n* Q3: Does the Year of No Light album Nord belong to this genre?\n* A3: Yes.", "#### SPARQL-to-Text Question Generation\n\n\nSPARQL-to-Text question generation refers to the task of converting a SPARQL query into a natural language question, eg:\n\n\ncould be converted into:\n\n\nDataset Structure\n-----------------", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY-SA 4.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This dataset", "#### The underlying corpus WEBNLG 3.0" ]
[ 64, 85, 24, 88, 58, 18, 56, 88, 28, 5, 11 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #language-English #license-cc-by-sa-4.0 #qa #knowledge-graph #sparql #region-us \n### Dataset Summary\n\n\nWEBNLG-QA is a conversational question answering dataset grounded on WEBNLG. It consists in a set of question-answering dialogues (follow-up question-answer pairs) based on short paragraphs of text. Each paragraph is associated a knowledge graph (from WEBNLG). The questions are associated with SPARQL queries.### Supported tasks\n\n\n* Knowledge-based question-answering\n* SPARQL-to-Text conversion#### Knowledge based question-answering\n\n\nBelow is an example of dialogue:\n\n\n* Q1: What is used as an instrument is Sludge Metal or in Post-metal?\n* A1: Singing, Synthesizer\n* Q2: And what about Sludge Metal in particular?\n* A2: Singing\n* Q3: Does the Year of No Light album Nord belong to this genre?\n* A3: Yes.#### SPARQL-to-Text Question Generation\n\n\nSPARQL-to-Text question generation refers to the task of converting a SPARQL query into a natural language question, eg:\n\n\ncould be converted into:\n\n\nDataset Structure\n-----------------### Types of questions\n\n\nComparison of question types compared to related datasets:### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL" ]
03ae44b46cc85785201e625620a4b66bffe0f13a
<div align="center"> <img width="640" alt="senthilsk/crack_detection_dataset" src="https://huggingface.co/datasets/senthilsk/crack_detection_dataset/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['crack', 'mold', 'peeling_paint', 'stairstep_crack', 'water_seepage'] ``` ### Number of Images ```json {'valid': 462, 'test': 225, 'train': 2263} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("senthilsk/crack_detection_dataset", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/objectdetection-qxiqx/detr_crack_dataset/dataset/1](https://universe.roboflow.com/objectdetection-qxiqx/detr_crack_dataset/dataset/1?ref=roboflow2huggingface) ### Citation ``` @misc{ detr_crack_dataset_dataset, title = { detr_crack_dataset Dataset }, type = { Open Source Dataset }, author = { objectdetection }, howpublished = { \\url{ https://universe.roboflow.com/objectdetection-qxiqx/detr_crack_dataset } }, url = { https://universe.roboflow.com/objectdetection-qxiqx/detr_crack_dataset }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2024 }, month = { jan }, note = { visited on 2024-01-09 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on January 9, 2024 at 4:01 AM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand and search unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time For state of the art Computer Vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com The 
dataset includes 2950 images. Cracks-AX10-cracks are annotated in COCO format. The following pre-processing was applied to each image: No image augmentation techniques were applied.
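Since the annotations are exported in COCO format, per-class annotation counts can be derived from the standard COCO layout (`categories` and `annotations` lists). The snippet below is a generic sketch on a toy COCO-style dict; the field names follow the COCO convention, and the sample values are invented for illustration (only the label names come from this card):

```python
# Sketch: counting annotations per class from a COCO-format annotation dict.
# Toy data for illustration; real exports carry one entry per bounding box.
from collections import Counter

coco = {
    "categories": [
        {"id": 0, "name": "crack"},
        {"id": 1, "name": "mold"},
        {"id": 2, "name": "water_seepage"},
    ],
    "annotations": [
        {"image_id": 0, "category_id": 0, "bbox": [10, 10, 40, 8]},
        {"image_id": 0, "category_id": 0, "bbox": [60, 20, 30, 6]},
        {"image_id": 1, "category_id": 2, "bbox": [5, 5, 90, 40]},
    ],
}

# Map category ids to names, then tally annotations per class name.
id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
counts = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])
print(dict(counts))  # {'crack': 2, 'water_seepage': 1}
```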
senthilsk/crack_detection_dataset
[ "task_categories:object-detection", "roboflow", "roboflow2huggingface", "region:us" ]
2024-01-08T08:48:31+00:00
{"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]}
2024-01-09T04:02:09+00:00
[]
[]
TAGS #task_categories-object-detection #roboflow #roboflow2huggingface #region-us
<div align="center"> <img width="640" alt="senthilsk/crack_detection_dataset" src="URL </div> ### Dataset Labels ### Number of Images ### How to Use - Install datasets: - Load the dataset: ### Roboflow Dataset Page URL ### License CC BY 4.0 ### Dataset Summary This dataset was exported via URL on January 9, 2024 at 4:01 AM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand and search unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time For state of the art Computer Vision training notebooks you can use with this dataset, visit URL To find over 100k other datasets and pre-trained models, visit URL The dataset includes 2950 images. Cracks-AX10-cracks are annotated in COCO format. The following pre-processing was applied to each image: No image augmentation techniques were applied.
[ "### Dataset Labels", "### Number of Images", "### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:", "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on January 9, 2024 at 4:01 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 2950 images.\nCracks-AX10-cracks are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied." ]
[ "TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n", "### Dataset Labels", "### Number of Images", "### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:", "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on January 9, 2024 at 4:01 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 2950 images.\nCracks-AX10-cracks are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied." ]
[ 27, 5, 5, 18, 8, 6, 180 ]
[ "passage: TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n### Dataset Labels### Number of Images### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:### Roboflow Dataset Page\nURL### License\nCC BY 4.0### Dataset Summary\nThis dataset was exported via URL on January 9, 2024 at 4:01 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 2950 images.\nCracks-AX10-cracks are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied." ]
fa74aeb35ba1d4c829c0601d64fb084f51d1ca47
# Dataset Card for "iemocap_synth" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/iemocap_synth
[ "region:us" ]
2024-01-08T09:23:57+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "original", "path": "data/original-*"}, {"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k_12bps", "path": "data/encodec_24k_12bps-*"}, {"split": "encodec_24k_1_5bps", "path": "data/encodec_24k_1_5bps-*"}, {"split": "encodec_24k_24bps", "path": "data/encodec_24k_24bps-*"}, {"split": "encodec_24k_3bps", "path": "data/encodec_24k_3bps-*"}, {"split": "encodec_24k_6bps", "path": "data/encodec_24k_6bps-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "id", "dtype": "string"}], "splits": [{"name": "original", "num_bytes": 805534560.953, "num_examples": 5531}, {"name": "academicodec_hifi_16k_320d", "num_bytes": 803935882.953, "num_examples": 5531}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 803935882.953, 
"num_examples": 5531}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 1206479242.953, "num_examples": 5531}, {"name": "audiodec_24k_320d", "num_bytes": 1209758482.953, "num_examples": 5531}, {"name": "dac_16k", "num_bytes": 805722614.953, "num_examples": 5531}, {"name": "dac_24k", "num_bytes": 1208292384.953, "num_examples": 5531}, {"name": "dac_44k", "num_bytes": 2219743798.953, "num_examples": 5531}, {"name": "encodec_24k_12bps", "num_bytes": 1208292384.953, "num_examples": 5531}, {"name": "encodec_24k_1_5bps", "num_bytes": 1208292384.953, "num_examples": 5531}, {"name": "encodec_24k_24bps", "num_bytes": 1208292384.953, "num_examples": 5531}, {"name": "encodec_24k_3bps", "num_bytes": 1208292384.953, "num_examples": 5531}, {"name": "encodec_24k_6bps", "num_bytes": 1208292384.953, "num_examples": 5531}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 805275064.953, "num_examples": 5531}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 805275064.953, "num_examples": 5531}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 805722614.953, "num_examples": 5531}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 805722614.953, "num_examples": 5531}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 805722614.953, "num_examples": 5531}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 805722614.953, "num_examples": 5531}, {"name": "speech_tokenizer_16k", "num_bytes": 807018762.953, "num_examples": 5531}], "download_size": 20057832574, "dataset_size": 20745324129.05999}}
2024-01-29T01:46:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "iemocap_synth" More Information needed
[ "# Dataset Card for \"iemocap_synth\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"iemocap_synth\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"iemocap_synth\"\n\nMore Information needed" ]
bbc847e86c604e1505387c3871d48822c9da6be5
The [riddle_sense](https://huggingface.co/datasets/riddle_sense) dataset reformatted into Alpaca format for instruction-tuning LLMs on reasoning capabilities.
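A minimal sketch of what such an Alpaca-style conversion might look like, assuming a hypothetical prompt template; the dataset's actual `instruction`/`input`/`output` contents (those are its declared features) may differ:

```python
# Illustrative sketch (not the dataset's actual conversion script): turn a
# riddle_sense-style multiple-choice item into an Alpaca-format record with
# "instruction"/"input"/"output" keys, matching the features this dataset
# exposes. The exact prompt wording below is an assumption.
def to_alpaca(question, choices, answer):
    formatted_choices = "\n".join(
        f"{label}. {text}" for label, text in zip("ABCDE", choices)
    )
    return {
        "instruction": "Answer the following riddle by choosing the best option.",
        "input": f"{question}\n{formatted_choices}",
        "output": answer,
    }

record = to_alpaca(
    "What has keys but can't open locks?",
    ["a piano", "a map", "a door", "a cage", "a car"],
    "a piano",
)
print(record["output"])  # a piano
```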
Technoculture/riddle_sense
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:mit", "reasoning", "region:us" ]
2024-01-08T09:25:28+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "pretty_name": "Riddle Sen", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 829501, "num_examples": 3510}, {"name": "validation", "num_bytes": 239903, "num_examples": 1021}, {"name": "test", "num_bytes": 249470, "num_examples": 1184}], "download_size": 651507, "dataset_size": 1318874}, "tags": ["reasoning"]}
2024-01-08T10:04:21+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-1K<n<10K #language-English #license-mit #reasoning #region-us
The riddle_sense dataset reformatted into Alpaca format for instruction-tuning LLMs on reasoning capabilities.
[]
[ "TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-mit #reasoning #region-us \n" ]
[ 43 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-mit #reasoning #region-us \n" ]
5331c95bbd955f118d3bce0b3badacc9c6132910
# Dataset Card for "iemocap_extract_unit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/iemocap_extract_unit
[ "region:us" ]
2024-01-08T09:37:20+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 40413131, "num_examples": 5531}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 40413131, "num_examples": 5531}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 60540299, "num_examples": 5531}, {"name": "audiodec_24k_320d", "num_bytes": 129312379, "num_examples": 5531}, {"name": "dac_16k", "num_bytes": 255078987, "num_examples": 5531}, {"name": "dac_24k", "num_bytes": 725740763, "num_examples": 5531}, {"name": "dac_44k", "num_bytes": 216229959, "num_examples": 5531}, {"name": "encodec_24k", "num_bytes": 30440899, 
"num_examples": 5531}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 323620059, "num_examples": 5531}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 323620059, "num_examples": 5531}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 323437275, "num_examples": 5531}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 162512347, "num_examples": 5531}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 323437275, "num_examples": 5531}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 323437275, "num_examples": 5531}, {"name": "speech_tokenizer_16k", "num_bytes": 80977275, "num_examples": 5531}], "download_size": 512373778, "dataset_size": 3359211113}}
2024-01-08T10:00:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "iemocap_extract_unit" More Information needed
[ "# Dataset Card for \"iemocap_extract_unit\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"iemocap_extract_unit\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"iemocap_extract_unit\"\n\nMore Information needed" ]
72d15fdd245839652aa30a5f8717b3b79f106c2a
Synthetic dataset of question/answer pairs extracted from [A-Roucher/huggingface_doc](https://huggingface.co/datasets/A-Roucher/huggingface_doc): use it together with that dataset to evaluate your RAG systems! ⭐️⭐️⭐️
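As a hedged illustration of the intended use, the sketch below scores a stand-in RAG system against reference answers shaped like this dataset's `question`/`answer` fields; the `rag_answer` function, the toy row, and the token-overlap metric are placeholders, not part of the dataset:

```python
# Minimal sketch of using question/answer pairs like this dataset's rows
# ("question", "answer", "source_doc" fields) to score a RAG system.
# `rag_answer` and the token-overlap F1 metric are illustrative placeholders.
def token_f1(prediction, reference):
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

eval_rows = [
    {"question": "What is the pipeline() function for?",
     "answer": "It runs inference with a pretrained model."},
]

def rag_answer(question):          # stand-in for your actual RAG system
    return "It runs inference with a pretrained model."

scores = [token_f1(rag_answer(r["question"]), r["answer"]) for r in eval_rows]
print(sum(scores) / len(scores))  # 1.0
```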
m-ric/huggingface_doc_qa_eval
[ "license:apache-2.0", "region:us" ]
2024-01-08T09:41:35+00:00
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "source_doc", "dtype": "string"}, {"name": "standalone_score", "dtype": "int64"}, {"name": "standalone_eval", "dtype": "string"}, {"name": "relatedness_score", "dtype": "int64"}, {"name": "relatedness_eval", "dtype": "string"}, {"name": "relevance_score", "dtype": "int64"}, {"name": "relevance_eval", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 611615.7037037037, "num_examples": 67}], "download_size": 296501, "dataset_size": 611615.7037037037}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-18T15:23:41+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Synthetic dataset of question/answer pairs extracted from A-Roucher/huggingface_doc: use it together with that dataset to evaluate your RAG systems! ⭐️⭐️⭐️
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
faac05b8c3a9e6fffb1af8ac1107ea6946959fc4
### Hola: Multilingual-Text-Generation-(LLM) and Language-Classification-Public-Dataset Release Date: 2/12/2024 (Myanmar Union Day) ### <b> Overview </b> Hola dataset contains data from 11 languages, including English, Burmese, Japanese, Spanish, Chinese (Traditional), Korean, Mon, Paoh, etc. And the data was crawled from Wikipedia. Each sample is a sentence data with a label of ISO 639-1 code of respective language, e.g. en, my, ja, es, zh, ko, etc. ### <b> Data </b> Each training sample and testing sample is a sentence data like this: <table> <tr> <th> text </th> </tr> <tr> <td> Human evolution is the evolutionary process within the history of primates that led to the emergence of Homo sapiens as a distinct species of the hominid family, which includes all the great apes </td> </tr> <tr> <td> ဇီဝဗေဒ ပညာတွင် ဆင့်ကဲပြောင်းလဲမှုဖြစ်စဉ် သို့မဟုတ် အီဗော်လူးရှင်း ဆိုသည်မှာ သက်ရှိအစုအဝေးများတွင် တွေ့ရသော မျိုးရိုးလိုက်သည့် လက္ခဏာရပ်များ အချိန်နှင့် အမျှ ပြောင်းလဲသွားခြင်းကို ခေါ်ဆိုခြင်း ဖြစ်သည် </td> </tr> <tr> <td> 人類の進化(じんるいのしんか、英: human evolution)とは、他と異なる独立した生物種としての現生人類(英名:anatomically modern humans〈意:解剖学的現代人〉、学名:Homo sapiens〈ホモ・サピエンス〉、他説では Homo sapiens sapiens )を主題としながら、これが属する系統群(クレード)全体を見渡そうとする概念における、当該系統群が辿ってきた生物学的進化をいう </td> </tr> <tr> <td> La evolución humana u hominización es el proceso de evolución biológica de la especie humana desde sus ancestros hasta la actualidad </td> </tr> <tr> <td> 人類演化过去一直指的是在旧的解剖学意义上的,根據演化學說,所做出的各種關於现代人出现的假设 </td> </tr> <tr> <td> 인류의 진화는 사람이 하나의 구분된 종으로 나타나게 되는 과정과 발전 또는 진화 과정이다 </td> </tr> </table> Each training sample and testing sample is labeled with one of the following labels: <table> <tr> <th> Label </th> <th> Description </th> </tr> <tr> <td> en </td> <td> English </td> </tr> <tr> <td> my </td> <td> Myanmar </td> </tr> <tr> <td> ja </td> <td> Japan </td> </tr> <tr> <td> es </td> <td> Spanish </td> </tr> <tr> <td> zh </td> <td> Simplified Chinese </td> </tr> <tr> <td> ko </td> <td> Korean </td> </tr> <tr> <td> si </td> <td> 
Sinhalese </td> </tr> <tr> <td> ta </td> <td> Tamil </td> </tr> <tr> <td> sw </td> <td> Swahili </td> </tr> <tr> <td> mnw </td> <td> Mon </td> </tr> <tr> <td> blk </td> <td> Paoh </td> </tr> </table> ### How to load the dataset: ``` from datasets import load_dataset dataset = load_dataset("simbolo-ai/hola") ``` ### Contributors: Main Contributor: [Sa Phyo Thu Htet](https://github.com/SaPhyoThuHtet) Other Contributors: Kaung Khant Ko Ko, Phuu Pwint Thinzar Kyaing, Yin Htwe ### How to Cite: ```bibtex @misc{wiki-burmese-sentences, author = {{Sa Phyo Thu Htet}}, title = {hola}, url = {https://huggingface.co/datasets/simbolo-ai/hola}, urldate = {2024-2-12}, date = {2024-2-12} } ```
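For convenience, the label codes in the table above can be mapped back to language names; the dictionary below is only a sketch transcribed from that table (`mnw` and `blk` are ISO 639-3 codes, the two-letter codes are ISO 639-1):

```python
# Sketch: map this dataset's label codes to language names, transcribed from
# the label table above (mnw and blk are ISO 639-3; the rest are ISO 639-1).
LABEL_NAMES = {
    "en": "English", "my": "Myanmar", "ja": "Japan", "es": "Spanish",
    "zh": "Simplified Chinese", "ko": "Korean", "si": "Sinhalese",
    "ta": "Tamil", "sw": "Swahili", "mnw": "Mon", "blk": "Paoh",
}

def label_to_name(code):
    return LABEL_NAMES.get(code, "unknown")

print(label_to_name("my"))  # Myanmar
```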
simbolo-ai/hola
[ "language:my", "license:gpl", "region:us" ]
2024-01-08T10:23:26+00:00
{"language": ["my"], "license": "gpl"}
2024-02-12T06:10:58+00:00
[]
[ "my" ]
TAGS #language-Burmese #license-gpl #region-us
### Hola: Multilingual-Text-Generation-(LLM) and Language-Classification-Public-Dataset Release Date: 2/12/2024 (Myanmar Union Day) ### **Overview** Hola dataset contains data from 11 languages, including English, Burmese, Japanese, Spanish, Chinese (Traditional), Korean, Mon, Paoh, etc. And the data was crawled from Wikipedia. Each sample is a sentence data with a label of ISO 639-1 code of respective language, e.g. en, my, ja, es, zh, ko, etc. ### **Data** Each training sample and testing sample is a sentence data like this: Each training sample and testing sample is labeled with one of the following labels: ### How to load the dataset: ### Contributors: Main Contributor: Sa Phyo Thu Htet Other Contributors: Kaung Khant Ko Ko, Phuu Pwint Thinzar Kyaing, Yin Htwe ### How to Cite:
[ "### Hola: Multilingual-Text-Generation-(LLM) and Language-Classification-Public-Dataset\n\n\nRelease Date: 2/12/2024 (Myanmar Union Day)", "### **Overview**\n\n\nHola dataset contains data from 11 languages, including English, Burmese, Japanese, Spanish, Chinese (Traditional), Korean, Mon, Paoh, etc. And the data was crawled from Wikipedia. Each sample is a sentence data with a label of ISO 639-1 code of respective language, e.g. en, my, ja, es, zh, ko, etc.", "### **Data**\n\n\nEach training sample and testing sample is a sentence data like this:\n\n\n\nEach training sample and testing sample is labeled with one of the following labels:", "### How to load the dataset:", "### Contributors:\n\n\nMain Contributor: Sa Phyo Thu Htet\n\n\nOther Contributors: Kaung Khant Ko Ko, Phuu Pwint Thinzar Kyaing, Yin Htwe", "### How to Cite:" ]
[ "TAGS\n#language-Burmese #license-gpl #region-us \n", "### Hola: Multilingual-Text-Generation-(LLM) and Language-Classification-Public-Dataset\n\n\nRelease Date: 2/12/2024 (Myanmar Union Day)", "### **Overview**\n\n\nHola dataset contains data from 11 languages, including English, Burmese, Japanese, Spanish, Chinese (Traditional), Korean, Mon, Paoh, etc. And the data was crawled from Wikipedia. Each sample is a sentence data with a label of ISO 639-1 code of respective language, e.g. en, my, ja, es, zh, ko, etc.", "### **Data**\n\n\nEach training sample and testing sample is a sentence data like this:\n\n\n\nEach training sample and testing sample is labeled with one of the following labels:", "### How to load the dataset:", "### Contributors:\n\n\nMain Contributor: Sa Phyo Thu Htet\n\n\nOther Contributors: Kaung Khant Ko Ko, Phuu Pwint Thinzar Kyaing, Yin Htwe", "### How to Cite:" ]
[ 16, 39, 88, 35, 9, 45, 7 ]
[ "passage: TAGS\n#language-Burmese #license-gpl #region-us \n### Hola: Multilingual-Text-Generation-(LLM) and Language-Classification-Public-Dataset\n\n\nRelease Date: 2/12/2024 (Myanmar Union Day)### **Overview**\n\n\nHola dataset contains data from 11 languages, including English, Burmese, Japanese, Spanish, Chinese (Traditional), Korean, Mon, Paoh, etc. And the data was crawled from Wikipedia. Each sample is a sentence data with a label of ISO 639-1 code of respective language, e.g. en, my, ja, es, zh, ko, etc.### **Data**\n\n\nEach training sample and testing sample is a sentence data like this:\n\n\n\nEach training sample and testing sample is labeled with one of the following labels:### How to load the dataset:### Contributors:\n\n\nMain Contributor: Sa Phyo Thu Htet\n\n\nOther Contributors: Kaung Khant Ko Ko, Phuu Pwint Thinzar Kyaing, Yin Htwe### How to Cite:" ]
7ef74d0624efc36c16fe3b82aa5438f7c7260706
YFCC15m dataset from <https://github.com/openai/CLIP/blob/main/data/yfcc100m.md>. The subset is obtained by filtering the original YFCC100m (yfcc100m_dataset.sql) using the photo ids from <https://github.com/openai/CLIP/blob/main/data/yfcc100m.md>. The script to rebuild the data from the original YFCC100m is provided at [build_yfcc15m.py](build_yfcc15m.py).
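The provided build_yfcc15m.py is the authoritative script; purely as an illustration of the filtering idea (keeping only rows of the YFCC100m dump whose photo id appears in the OpenAI-released id list), a sketch might look like the following, where the tab-separated layout with the photo id in the first column is an assumption:

```python
# Illustrative sketch of the subset-filtering idea only; see the provided
# build_yfcc15m.py for the real procedure. The tab-separated row layout with
# the photo id as the first column is assumed for illustration.
def filter_rows(rows, keep_ids):
    keep = set(keep_ids)
    return [row for row in rows if row.split("\t", 1)[0] in keep]

rows = [
    "1001\thttp://example.com/a.jpg\ta photo of a dog",
    "1002\thttp://example.com/b.jpg\ta photo of a cat",
    "1003\thttp://example.com/c.jpg\tsome other photo",
]
subset = filter_rows(rows, ["1001", "1003"])
print(len(subset))  # 2
```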
mehdidc/yfcc15m
[ "license:cc", "region:us" ]
2024-01-08T10:35:53+00:00
{"license": "cc"}
2024-01-08T10:59:49+00:00
[]
[]
TAGS #license-cc #region-us
YFCC15m dataset from <URL The subset is obtained by filtering the original YFCC100m (yfcc100m_dataset.sql) using the photo ids from <URL The script to rebuild the data from the original YFCC100m is provided at build_yfcc15m.py.
[]
[ "TAGS\n#license-cc #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-cc #region-us \n" ]
58ca475cc6bc01e4d55b89ad9dcbc8d7edfd63d5
- 37.870 texts in total, 17.850 NOT clickbait texts and 20.020 CLICKBAIT texts - All duplicate values were removed - Split using sklearn into 80% train and 20% temporary test (stratified by label). The temporary test set was then split 50/50 into test and validation sets (stratified by label) - Split: 80/10/10 - Train set label distribution: 0 ==> 14.280, 1 ==> 16.016 - Validation set label distribution: 0 ==> 1.785, 1 ==> 2.002 - Test set label distribution: 0 ==> 1.785, 1 ==> 2.002 - The dataset was created from the combination of other available datasets online. Their links are available here: - https://www.kaggle.com/datasets/amananandrai/clickbait-dataset - https://www.kaggle.com/datasets/thelazyaz/youtube-clickbait-classification?resource=download - https://www.kaggle.com/datasets/vikassingh1996/news-clickbait-dataset?select=train2.csv - https://www.kaggle.com/competitions/clickbait-news-detection/data?select=train.csv - https://www.kaggle.com/competitions/clickbait-news-detection/data?select=valid.csv - https://zenodo.org/records/6362726#.YsbdSTVBzrk
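The two-stage stratified 80/10/10 split described above can be sketched as follows; the card used sklearn's `train_test_split`, so this dependency-free version is only illustrative of the same procedure:

```python
# Dependency-free sketch of the split described above: 80% train / 20%
# temporary test, then the temporary set split 50/50 into validation and
# test, stratified by label throughout. The card's actual split used
# sklearn's train_test_split.
import random

def stratified_split(labels, fraction, seed=42):
    """Return two index lists, giving each class `fraction` of its items."""
    rng = random.Random(seed)
    first, second = [], []
    for lab in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == lab]
        rng.shuffle(idx)
        cut = round(len(idx) * fraction)
        first.extend(idx[:cut])
        second.extend(idx[cut:])
    return first, second

labels = [0] * 40 + [1] * 60                      # toy 40/60 label balance
train_idx, temp_idx = stratified_split(labels, 0.8)
temp_labels = [labels[i] for i in temp_idx]
val_pos, test_pos = stratified_split(temp_labels, 0.5)
val_idx = [temp_idx[p] for p in val_pos]
test_idx = [temp_idx[p] for p in test_pos]
print(len(train_idx), len(val_idx), len(test_idx))  # 80 10 10
```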
christinacdl/clickbait_detection_dataset
[ "task_categories:text-classification", "language:en", "license:apache-2.0", "clickbait", "not", "binary_classification", "region:us" ]
2024-01-08T10:39:08+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"], "tags": ["clickbait", "not", "binary_classification"]}
2024-01-08T10:43:56+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #license-apache-2.0 #clickbait #not #binary_classification #region-us
- 37.870 texts in total, 17.850 NOT clickbait texts and 20.020 CLICKBAIT texts - All duplicate values were removed - Split using sklearn into 80% train and 20% temporary test (stratified by label). The temporary test set was then split 50/50 into test and validation sets (stratified by label) - Split: 80/10/10 - Train set label distribution: 0 ==> 14.280, 1 ==> 16.016 - Validation set label distribution: 0 ==> 1.785, 1 ==> 2.002 - Test set label distribution: 0 ==> 1.785, 1 ==> 2.002 - The dataset was created from the combination of other available datasets online. Their links are available here: - URL - URL - URL - URL - URL - URL
[]
[ "TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #clickbait #not #binary_classification #region-us \n" ]
[ 40 ]
[ "passage: TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #clickbait #not #binary_classification #region-us \n" ]
3d3b905d69dcda64923956fb908dfc11d2011cd7
# DynaBreastLite A lightweight dynamic 4D human breast anthropometric dataset.
liu-qilong/dyna-breast-lite
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2024-01-08T10:44:05+00:00
{"license": "cc-by-nc-sa-4.0", "pretty_name": "D"}
2024-01-10T02:20:34+00:00
[]
[]
TAGS #license-cc-by-nc-sa-4.0 #region-us
# DynaBreastLite A lightweight dynamic 4D human breast anthropometric dataset.
[ "# DynaBreastLite\n\nA lightweight dynamic 4D human breast anthropometric dataset." ]
[ "TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n", "# DynaBreastLite\n\nA lightweight dynamic 4D human breast anthropometric dataset." ]
[ 19, 22 ]
[ "passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n# DynaBreastLite\n\nA lightweight dynamic 4D human breast anthropometric dataset." ]
51beac7687878bfee1dcd43ae8795c3004a2dc94
# Dataset Card for Evaluation run of s3nh/nsfw-noromaid-mistral-instruct <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [s3nh/nsfw-noromaid-mistral-instruct](https://huggingface.co/s3nh/nsfw-noromaid-mistral-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_s3nh__nsfw-noromaid-mistral-instruct", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-08T10:52:46.659107](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__nsfw-noromaid-mistral-instruct/blob/main/results_2024-01-08T10-52-46.659107.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.4636397253806667, "acc_stderr": 0.03433097238406634, "acc_norm": 0.4705183598294859, "acc_norm_stderr": 0.03515170311421716, "mc1": 0.2252141982864137, "mc1_stderr": 0.014623240768023495, "mc2": 0.3349241921724532, "mc2_stderr": 0.013441943397542705 }, "harness|arc:challenge|25": { "acc": 0.4786689419795222, "acc_stderr": 0.014598087973127106, "acc_norm": 0.5179180887372014, "acc_norm_stderr": 0.014602005585490976 }, "harness|hellaswag|10": { "acc": 0.5367456681935869, "acc_stderr": 0.0049762883216818215, "acc_norm": 0.7539334793865764, "acc_norm_stderr": 0.004298374936365623 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.045126085985421296, "acc_norm": 0.28, "acc_norm_stderr": 0.045126085985421296 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.45185185185185184, "acc_stderr": 0.04299268905480864, "acc_norm": 0.45185185185185184, "acc_norm_stderr": 0.04299268905480864 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.5592105263157895, "acc_stderr": 0.04040311062490437, "acc_norm": 0.5592105263157895, "acc_norm_stderr": 0.04040311062490437 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.49, "acc_stderr": 0.05024183937956911, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5245283018867924, "acc_stderr": 0.030735822206205615, "acc_norm": 0.5245283018867924, "acc_norm_stderr": 0.030735822206205615 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5555555555555556, "acc_stderr": 0.041553199555931467, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.041553199555931467 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, 
"acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.4624277456647399, "acc_stderr": 0.0380168510452446, "acc_norm": 0.4624277456647399, "acc_norm_stderr": 0.0380168510452446 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.19607843137254902, "acc_stderr": 0.039505818611799616, "acc_norm": 0.19607843137254902, "acc_norm_stderr": 0.039505818611799616 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.61, "acc_stderr": 0.04902071300001975, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4425531914893617, "acc_stderr": 0.032469569197899575, "acc_norm": 0.4425531914893617, "acc_norm_stderr": 0.032469569197899575 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.3157894736842105, "acc_stderr": 0.043727482902780064, "acc_norm": 0.3157894736842105, "acc_norm_stderr": 0.043727482902780064 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.46206896551724136, "acc_stderr": 0.04154659671707548, "acc_norm": 0.46206896551724136, "acc_norm_stderr": 0.04154659671707548 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3412698412698413, "acc_stderr": 0.024419234966819074, "acc_norm": 0.3412698412698413, "acc_norm_stderr": 0.024419234966819074 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.23809523809523808, "acc_stderr": 0.03809523809523811, "acc_norm": 0.23809523809523808, "acc_norm_stderr": 0.03809523809523811 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.45806451612903226, "acc_stderr": 0.028343787250540618, "acc_norm": 0.45806451612903226, "acc_norm_stderr": 0.028343787250540618 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.39901477832512317, "acc_stderr": 0.03445487686264716, "acc_norm": 0.39901477832512317, "acc_norm_stderr": 0.03445487686264716 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.41, "acc_stderr": 0.04943110704237102, "acc_norm": 0.41, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.2787878787878788, "acc_stderr": 0.03501438706296781, "acc_norm": 0.2787878787878788, "acc_norm_stderr": 0.03501438706296781 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6060606060606061, "acc_stderr": 0.03481285338232963, "acc_norm": 0.6060606060606061, "acc_norm_stderr": 0.03481285338232963 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.6683937823834197, "acc_stderr": 0.03397636541089118, "acc_norm": 0.6683937823834197, "acc_norm_stderr": 0.03397636541089118 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.4641025641025641, "acc_stderr": 0.025285585990017848, "acc_norm": 0.4641025641025641, "acc_norm_stderr": 0.025285585990017848 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2851851851851852, "acc_stderr": 0.027528599210340492, "acc_norm": 0.2851851851851852, "acc_norm_stderr": 0.027528599210340492 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.46218487394957986, "acc_stderr": 0.032385469487589795, "acc_norm": 0.46218487394957986, "acc_norm_stderr": 0.032385469487589795 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33112582781456956, "acc_stderr": 0.038425817186598696, "acc_norm": 0.33112582781456956, "acc_norm_stderr": 0.038425817186598696 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.5614678899082569, "acc_stderr": 0.02127471307395458, "acc_norm": 0.5614678899082569, "acc_norm_stderr": 0.02127471307395458 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.3287037037037037, "acc_stderr": 
0.03203614084670058, "acc_norm": 0.3287037037037037, "acc_norm_stderr": 0.03203614084670058 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.35294117647058826, "acc_stderr": 0.033540924375915195, "acc_norm": 0.35294117647058826, "acc_norm_stderr": 0.033540924375915195 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.5569620253164557, "acc_stderr": 0.03233532777533484, "acc_norm": 0.5569620253164557, "acc_norm_stderr": 0.03233532777533484 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5515695067264574, "acc_stderr": 0.03337883736255099, "acc_norm": 0.5515695067264574, "acc_norm_stderr": 0.03337883736255099 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5648854961832062, "acc_stderr": 0.04348208051644858, "acc_norm": 0.5648854961832062, "acc_norm_stderr": 0.04348208051644858 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6942148760330579, "acc_stderr": 0.04205953933884122, "acc_norm": 0.6942148760330579, "acc_norm_stderr": 0.04205953933884122 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.49074074074074076, "acc_stderr": 0.04832853553437055, "acc_norm": 0.49074074074074076, "acc_norm_stderr": 0.04832853553437055 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.5276073619631901, "acc_stderr": 0.0392237829061099, "acc_norm": 0.5276073619631901, "acc_norm_stderr": 0.0392237829061099 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.39285714285714285, "acc_stderr": 0.04635550135609976, "acc_norm": 0.39285714285714285, "acc_norm_stderr": 0.04635550135609976 }, "harness|hendrycksTest-management|5": { "acc": 0.6310679611650486, "acc_stderr": 0.0477761518115674, "acc_norm": 0.6310679611650486, "acc_norm_stderr": 0.0477761518115674 }, "harness|hendrycksTest-marketing|5": { "acc": 0.6752136752136753, "acc_stderr": 0.03067902276549883, "acc_norm": 0.6752136752136753, "acc_norm_stderr": 0.03067902276549883 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.42, "acc_stderr": 
0.04960449637488585, "acc_norm": 0.42, "acc_norm_stderr": 0.04960449637488585 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.669220945083014, "acc_stderr": 0.016824818462563746, "acc_norm": 0.669220945083014, "acc_norm_stderr": 0.016824818462563746 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5491329479768786, "acc_stderr": 0.026788811931562757, "acc_norm": 0.5491329479768786, "acc_norm_stderr": 0.026788811931562757 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.27039106145251396, "acc_stderr": 0.014854993938010083, "acc_norm": 0.27039106145251396, "acc_norm_stderr": 0.014854993938010083 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5359477124183006, "acc_stderr": 0.028555827516528784, "acc_norm": 0.5359477124183006, "acc_norm_stderr": 0.028555827516528784 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5530546623794212, "acc_stderr": 0.028237769422085328, "acc_norm": 0.5530546623794212, "acc_norm_stderr": 0.028237769422085328 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5339506172839507, "acc_stderr": 0.02775653525734767, "acc_norm": 0.5339506172839507, "acc_norm_stderr": 0.02775653525734767 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.38652482269503546, "acc_stderr": 0.02904919034254347, "acc_norm": 0.38652482269503546, "acc_norm_stderr": 0.02904919034254347 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2835723598435463, "acc_stderr": 0.011511900775968328, "acc_norm": 0.2835723598435463, "acc_norm_stderr": 0.011511900775968328 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.375, "acc_stderr": 0.029408372932278746, "acc_norm": 0.375, "acc_norm_stderr": 0.029408372932278746 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.4852941176470588, "acc_stderr": 0.020219083895133924, "acc_norm": 0.4852941176470588, "acc_norm_stderr": 0.020219083895133924 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6, "acc_stderr": 0.0469237132203465, "acc_norm": 0.6, 
"acc_norm_stderr": 0.0469237132203465 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5224489795918368, "acc_stderr": 0.03197694118713672, "acc_norm": 0.5224489795918368, "acc_norm_stderr": 0.03197694118713672 }, "harness|hendrycksTest-sociology|5": { "acc": 0.527363184079602, "acc_stderr": 0.035302355173346824, "acc_norm": 0.527363184079602, "acc_norm_stderr": 0.035302355173346824 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-virology|5": { "acc": 0.43373493975903615, "acc_stderr": 0.03858158940685517, "acc_norm": 0.43373493975903615, "acc_norm_stderr": 0.03858158940685517 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.672514619883041, "acc_stderr": 0.035993357714560276, "acc_norm": 0.672514619883041, "acc_norm_stderr": 0.035993357714560276 }, "harness|truthfulqa:mc|0": { "mc1": 0.2252141982864137, "mc1_stderr": 0.014623240768023495, "mc2": 0.3349241921724532, "mc2_stderr": 0.013441943397542705 }, "harness|winogrande|5": { "acc": 0.7119179163378059, "acc_stderr": 0.012727884724248115 }, "harness|gsm8k|5": { "acc": 0.06595905989385899, "acc_stderr": 0.006836951192034228 } }
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
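The details repositories referenced throughout this card follow a simple naming convention: the model id with `/` replaced by `__`, prefixed by `open-llm-leaderboard/details_` (as seen in this card's own `load_dataset` call). A minimal sketch of that convention — the helper name `details_repo` is ours for illustration, not part of the `datasets` API:

```python
def details_repo(model_id: str) -> str:
    """Build the Open LLM Leaderboard details-repository name for a model.

    Per-model evaluation details are published under
    ``open-llm-leaderboard/details_<org>__<model>``, i.e. the Hub model id
    with "/" replaced by "__".
    """
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")


if __name__ == "__main__":
    # Matches the dataset id used in this card's load_dataset example.
    print(details_repo("s3nh/nsfw-noromaid-mistral-instruct"))
    # open-llm-leaderboard/details_s3nh__nsfw-noromaid-mistral-instruct
```

The resulting name can be passed directly to `datasets.load_dataset` together with one of the config names listed in this card, e.g. `"harness_winogrande_5"`.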
open-llm-leaderboard/details_s3nh__nsfw-noromaid-mistral-instruct
[ "region:us" ]
2024-01-08T10:55:05+00:00
{"pretty_name": "Evaluation run of s3nh/nsfw-noromaid-mistral-instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [s3nh/nsfw-noromaid-mistral-instruct](https://huggingface.co/s3nh/nsfw-noromaid-mistral-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_s3nh__nsfw-noromaid-mistral-instruct\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-08T10:52:46.659107](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__nsfw-noromaid-mistral-instruct/blob/main/results_2024-01-08T10-52-46.659107.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4636397253806667,\n \"acc_stderr\": 0.03433097238406634,\n \"acc_norm\": 0.4705183598294859,\n \"acc_norm_stderr\": 0.03515170311421716,\n \"mc1\": 0.2252141982864137,\n \"mc1_stderr\": 0.014623240768023495,\n \"mc2\": 0.3349241921724532,\n \"mc2_stderr\": 0.013441943397542705\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4786689419795222,\n \"acc_stderr\": 0.014598087973127106,\n \"acc_norm\": 0.5179180887372014,\n \"acc_norm_stderr\": 0.014602005585490976\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5367456681935869,\n \"acc_stderr\": 0.0049762883216818215,\n \"acc_norm\": 0.7539334793865764,\n \"acc_norm_stderr\": 0.004298374936365623\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421296,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421296\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45185185185185184,\n \"acc_stderr\": 0.04299268905480864,\n \"acc_norm\": 0.45185185185185184,\n \"acc_norm_stderr\": 0.04299268905480864\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5592105263157895,\n \"acc_stderr\": 0.04040311062490437,\n \"acc_norm\": 0.5592105263157895,\n \"acc_norm_stderr\": 0.04040311062490437\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5245283018867924,\n \"acc_stderr\": 0.030735822206205615,\n \"acc_norm\": 0.5245283018867924,\n \"acc_norm_stderr\": 0.030735822206205615\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.041553199555931467,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.041553199555931467\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4624277456647399,\n \"acc_stderr\": 0.0380168510452446,\n \"acc_norm\": 0.4624277456647399,\n \"acc_norm_stderr\": 0.0380168510452446\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.19607843137254902,\n \"acc_stderr\": 0.039505818611799616,\n \"acc_norm\": 0.19607843137254902,\n \"acc_norm_stderr\": 0.039505818611799616\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4425531914893617,\n \"acc_stderr\": 0.032469569197899575,\n \"acc_norm\": 0.4425531914893617,\n \"acc_norm_stderr\": 0.032469569197899575\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3157894736842105,\n \"acc_stderr\": 0.043727482902780064,\n \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.043727482902780064\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.46206896551724136,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.46206896551724136,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3412698412698413,\n \"acc_stderr\": 0.024419234966819074,\n \"acc_norm\": 0.3412698412698413,\n \"acc_norm_stderr\": 0.024419234966819074\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23809523809523808,\n \"acc_stderr\": 
0.03809523809523811,\n \"acc_norm\": 0.23809523809523808,\n \"acc_norm_stderr\": 0.03809523809523811\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.45806451612903226,\n \"acc_stderr\": 0.028343787250540618,\n \"acc_norm\": 0.45806451612903226,\n \"acc_norm_stderr\": 0.028343787250540618\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.39901477832512317,\n \"acc_stderr\": 0.03445487686264716,\n \"acc_norm\": 0.39901477832512317,\n \"acc_norm_stderr\": 0.03445487686264716\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2787878787878788,\n \"acc_stderr\": 0.03501438706296781,\n \"acc_norm\": 0.2787878787878788,\n \"acc_norm_stderr\": 0.03501438706296781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6060606060606061,\n \"acc_stderr\": 0.03481285338232963,\n \"acc_norm\": 0.6060606060606061,\n \"acc_norm_stderr\": 0.03481285338232963\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.6683937823834197,\n \"acc_stderr\": 0.03397636541089118,\n \"acc_norm\": 0.6683937823834197,\n \"acc_norm_stderr\": 0.03397636541089118\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4641025641025641,\n \"acc_stderr\": 0.025285585990017848,\n \"acc_norm\": 0.4641025641025641,\n \"acc_norm_stderr\": 0.025285585990017848\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2851851851851852,\n \"acc_stderr\": 0.027528599210340492,\n \"acc_norm\": 0.2851851851851852,\n \"acc_norm_stderr\": 0.027528599210340492\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.46218487394957986,\n \"acc_stderr\": 0.032385469487589795,\n \"acc_norm\": 0.46218487394957986,\n \"acc_norm_stderr\": 0.032385469487589795\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.5614678899082569,\n \"acc_stderr\": 0.02127471307395458,\n \"acc_norm\": 0.5614678899082569,\n \"acc_norm_stderr\": 0.02127471307395458\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3287037037037037,\n \"acc_stderr\": 0.03203614084670058,\n \"acc_norm\": 0.3287037037037037,\n \"acc_norm_stderr\": 0.03203614084670058\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.033540924375915195,\n \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.033540924375915195\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.5569620253164557,\n \"acc_stderr\": 0.03233532777533484,\n \"acc_norm\": 0.5569620253164557,\n \"acc_norm_stderr\": 0.03233532777533484\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5515695067264574,\n \"acc_stderr\": 0.03337883736255099,\n \"acc_norm\": 0.5515695067264574,\n \"acc_norm_stderr\": 0.03337883736255099\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.04348208051644858,\n \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.04348208051644858\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6942148760330579,\n \"acc_stderr\": 0.04205953933884122,\n \"acc_norm\": 0.6942148760330579,\n \"acc_norm_stderr\": 0.04205953933884122\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.49074074074074076,\n \"acc_stderr\": 
0.04832853553437055,\n \"acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.04832853553437055\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5276073619631901,\n \"acc_stderr\": 0.0392237829061099,\n \"acc_norm\": 0.5276073619631901,\n \"acc_norm_stderr\": 0.0392237829061099\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n \"acc_stderr\": 0.04635550135609976,\n \"acc_norm\": 0.39285714285714285,\n \"acc_norm_stderr\": 0.04635550135609976\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6310679611650486,\n \"acc_stderr\": 0.0477761518115674,\n \"acc_norm\": 0.6310679611650486,\n \"acc_norm_stderr\": 0.0477761518115674\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6752136752136753,\n \"acc_stderr\": 0.03067902276549883,\n \"acc_norm\": 0.6752136752136753,\n \"acc_norm_stderr\": 0.03067902276549883\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488585,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488585\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.669220945083014,\n \"acc_stderr\": 0.016824818462563746,\n \"acc_norm\": 0.669220945083014,\n \"acc_norm_stderr\": 0.016824818462563746\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5491329479768786,\n \"acc_stderr\": 0.026788811931562757,\n \"acc_norm\": 0.5491329479768786,\n \"acc_norm_stderr\": 0.026788811931562757\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27039106145251396,\n \"acc_stderr\": 0.014854993938010083,\n \"acc_norm\": 0.27039106145251396,\n \"acc_norm_stderr\": 0.014854993938010083\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5359477124183006,\n \"acc_stderr\": 0.028555827516528784,\n \"acc_norm\": 0.5359477124183006,\n \"acc_norm_stderr\": 0.028555827516528784\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5530546623794212,\n \"acc_stderr\": 
0.028237769422085328,\n \"acc_norm\": 0.5530546623794212,\n \"acc_norm_stderr\": 0.028237769422085328\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5339506172839507,\n \"acc_stderr\": 0.02775653525734767,\n \"acc_norm\": 0.5339506172839507,\n \"acc_norm_stderr\": 0.02775653525734767\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.38652482269503546,\n \"acc_stderr\": 0.02904919034254347,\n \"acc_norm\": 0.38652482269503546,\n \"acc_norm_stderr\": 0.02904919034254347\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2835723598435463,\n \"acc_stderr\": 0.011511900775968328,\n \"acc_norm\": 0.2835723598435463,\n \"acc_norm_stderr\": 0.011511900775968328\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.375,\n \"acc_stderr\": 0.029408372932278746,\n \"acc_norm\": 0.375,\n \"acc_norm_stderr\": 0.029408372932278746\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4852941176470588,\n \"acc_stderr\": 0.020219083895133924,\n \"acc_norm\": 0.4852941176470588,\n \"acc_norm_stderr\": 0.020219083895133924\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.0469237132203465,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.0469237132203465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5224489795918368,\n \"acc_stderr\": 0.03197694118713672,\n \"acc_norm\": 0.5224489795918368,\n \"acc_norm_stderr\": 0.03197694118713672\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.527363184079602,\n \"acc_stderr\": 0.035302355173346824,\n \"acc_norm\": 0.527363184079602,\n \"acc_norm_stderr\": 0.035302355173346824\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.43373493975903615,\n \"acc_stderr\": 0.03858158940685517,\n 
\"acc_norm\": 0.43373493975903615,\n \"acc_norm_stderr\": 0.03858158940685517\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.672514619883041,\n \"acc_stderr\": 0.035993357714560276,\n \"acc_norm\": 0.672514619883041,\n \"acc_norm_stderr\": 0.035993357714560276\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2252141982864137,\n \"mc1_stderr\": 0.014623240768023495,\n \"mc2\": 0.3349241921724532,\n \"mc2_stderr\": 0.013441943397542705\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7119179163378059,\n \"acc_stderr\": 0.012727884724248115\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06595905989385899,\n \"acc_stderr\": 0.006836951192034228\n }\n}\n```", "repo_url": "https://huggingface.co/s3nh/nsfw-noromaid-mistral-instruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|arc:challenge|25_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|gsm8k|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hellaswag|10_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T10-52-46.659107.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T10-52-46.659107.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T10-52-46.659107.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T10-52-46.659107.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T10-52-46.659107.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T10-52-46.659107.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T10-52-46.659107.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T10-52-46.659107.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["**/details_harness|winogrande|5_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-08T10-52-46.659107.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_08T10_52_46.659107", "path": ["results_2024-01-08T10-52-46.659107.parquet"]}, {"split": "latest", "path": ["results_2024-01-08T10-52-46.659107.parquet"]}]}]}
2024-01-08T10:55:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of s3nh/nsfw-noromaid-mistral-instruct Dataset automatically created during the evaluation run of model s3nh/nsfw-noromaid-mistral-instruct on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-08T10:52:46.659107 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
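The "To load the details from a run" step above lost its snippet in this copy of the card; the other cards in this collection show the pattern: details live in a dataset repo named `open-llm-leaderboard/details_<org>__<model>` (the `/` in the model id replaced by `__`), and each task is a config with a "latest" split. A minimal sketch of that naming convention (the helper name `details_repo` is ours, not part of the leaderboard tooling):

```python
def details_repo(model_name: str) -> str:
    """Map a Hub model id to its leaderboard details repo id.

    The leaderboard stores per-model details under
    open-llm-leaderboard/details_<org>__<model>, with the "/" in the
    model id replaced by "__" (the same pattern used by the other
    cards in this collection).
    """
    return "open-llm-leaderboard/details_" + model_name.replace("/", "__")


repo = details_repo("s3nh/nsfw-noromaid-mistral-instruct")
print(repo)  # open-llm-leaderboard/details_s3nh__nsfw-noromaid-mistral-instruct

# With the `datasets` library installed, one task's details can then be
# loaded by config name, e.g.:
#   from datasets import load_dataset
#   data = load_dataset(repo, "harness_winogrande_5", split="train")
```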
16184c21a2074cb9923849aeb489275461d39ac3
# Dataset Card for Evaluation run of s3nh/Noromaid-Aeryth-7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [s3nh/Noromaid-Aeryth-7B](https://huggingface.co/s3nh/Noromaid-Aeryth-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_s3nh__Noromaid-Aeryth-7B",
	"harness_winogrande_5",
	split="train")
```

## Latest results These are the [latest results from run 2024-01-08T11:15:33.094144](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Noromaid-Aeryth-7B/blob/main/results_2024-01-08T11-15-33.094144.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.5686288233051527, "acc_stderr": 0.03387968821236966, "acc_norm": 0.5764553734191823, "acc_norm_stderr": 0.03462463060484352, "mc1": 0.48714810281517745, "mc1_stderr": 0.017497717944299825, "mc2": 0.6565799618870065, "mc2_stderr": 0.01582378775792379 },
    "harness|arc:challenge|25": { "acc": 0.5392491467576792, "acc_stderr": 0.014566303676636584, "acc_norm": 0.5674061433447098, "acc_norm_stderr": 0.014478005694182523 },
    "harness|hellaswag|10": { "acc": 0.6040629356701852, "acc_stderr": 0.00488051543132316, "acc_norm": 0.786197968532165, "acc_norm_stderr": 0.004091507850559578 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.562962962962963, "acc_stderr": 0.04284958639753401, "acc_norm": 0.562962962962963, "acc_norm_stderr": 0.04284958639753401 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.5921052631578947, "acc_stderr": 0.039993097127774734, "acc_norm": 0.5921052631578947, "acc_norm_stderr": 0.039993097127774734 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.56, "acc_stderr": 0.04988876515698589, "acc_norm": 0.56, "acc_norm_stderr": 0.04988876515698589 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6226415094339622, "acc_stderr": 0.02983280811479601, "acc_norm": 0.6226415094339622, "acc_norm_stderr": 0.02983280811479601 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.625, "acc_stderr": 0.04048439222695598, "acc_norm": 0.625, "acc_norm_stderr": 0.04048439222695598 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.5549132947976878, "acc_stderr": 0.03789401760283647, "acc_norm": 0.5549132947976878, "acc_norm_stderr": 0.03789401760283647 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.48936170212765956, "acc_stderr": 0.03267862331014063, "acc_norm": 0.48936170212765956, "acc_norm_stderr": 0.03267862331014063 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.4298245614035088, "acc_stderr": 0.04657047260594963, "acc_norm": 0.4298245614035088, "acc_norm_stderr": 0.04657047260594963 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.37037037037037035, "acc_stderr": 0.024870815251057075, "acc_norm": 0.37037037037037035, "acc_norm_stderr": 0.024870815251057075 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.3888888888888889, "acc_stderr": 0.04360314860077459, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.04360314860077459 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6806451612903226, "acc_stderr": 0.026522709674667765, "acc_norm": 0.6806451612903226, "acc_norm_stderr": 0.026522709674667765 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.45320197044334976, "acc_stderr": 0.03502544650845872, "acc_norm": 0.45320197044334976, "acc_norm_stderr": 0.03502544650845872 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.61, "acc_stderr": 0.04902071300001975, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001975 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6363636363636364, "acc_stderr": 0.03756335775187897, "acc_norm": 0.6363636363636364, "acc_norm_stderr": 0.03756335775187897 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7575757575757576, "acc_stderr": 0.03053289223393202, "acc_norm": 0.7575757575757576, "acc_norm_stderr": 0.03053289223393202 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8031088082901554, "acc_stderr": 0.02869787397186067, "acc_norm": 0.8031088082901554, "acc_norm_stderr": 0.02869787397186067 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5384615384615384, "acc_stderr": 0.025275892070240644, "acc_norm": 0.5384615384615384, "acc_norm_stderr": 0.025275892070240644 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3592592592592593, "acc_stderr": 0.029252905927251976, "acc_norm": 0.3592592592592593, "acc_norm_stderr": 0.029252905927251976 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5756302521008403, "acc_stderr": 0.032104790510157764, "acc_norm": 0.5756302521008403, "acc_norm_stderr": 0.032104790510157764 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33112582781456956, "acc_stderr": 0.038425817186598696, "acc_norm": 0.33112582781456956, "acc_norm_stderr": 0.038425817186598696 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7467889908256881, "acc_stderr": 0.01864407304137504, "acc_norm": 0.7467889908256881, "acc_norm_stderr": 0.01864407304137504 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4398148148148148, "acc_stderr": 0.03385177976044811, "acc_norm": 0.4398148148148148, "acc_norm_stderr": 0.03385177976044811 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7107843137254902, "acc_stderr": 0.031822318676475544, "acc_norm": 0.7107843137254902, "acc_norm_stderr": 0.031822318676475544 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.6666666666666666, "acc_stderr": 0.0306858205966108, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.0306858205966108 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6233183856502242, "acc_stderr": 0.032521134899291884, "acc_norm": 0.6233183856502242, "acc_norm_stderr": 0.032521134899291884 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6412213740458015, "acc_stderr": 0.04206739313864908, "acc_norm": 0.6412213740458015, "acc_norm_stderr": 0.04206739313864908 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.71900826446281, "acc_stderr": 0.04103203830514512, "acc_norm": 0.71900826446281, "acc_norm_stderr": 0.04103203830514512 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6388888888888888, "acc_stderr": 0.04643454608906275, "acc_norm": 0.6388888888888888, "acc_norm_stderr": 0.04643454608906275 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6748466257668712, "acc_stderr": 0.036803503712864616, "acc_norm": 0.6748466257668712, "acc_norm_stderr": 0.036803503712864616 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.49107142857142855, "acc_stderr": 0.04745033255489123, "acc_norm": 0.49107142857142855, "acc_norm_stderr": 0.04745033255489123 },
    "harness|hendrycksTest-management|5": { "acc": 0.7378640776699029, "acc_stderr": 0.04354631077260595, "acc_norm": 0.7378640776699029, "acc_norm_stderr": 0.04354631077260595 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8376068376068376, "acc_stderr": 0.02416161812798774, "acc_norm": 0.8376068376068376, "acc_norm_stderr": 0.02416161812798774 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7254150702426565, "acc_stderr": 0.015959829933084046, "acc_norm": 0.7254150702426565, "acc_norm_stderr": 0.015959829933084046 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5924855491329479, "acc_stderr": 0.026454578146931505, "acc_norm": 0.5924855491329479, "acc_norm_stderr": 0.026454578146931505 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.37318435754189944, "acc_stderr": 0.016175692013381943, "acc_norm": 0.37318435754189944, "acc_norm_stderr": 0.016175692013381943 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.6078431372549019, "acc_stderr": 0.027956046165424523, "acc_norm": 0.6078431372549019, "acc_norm_stderr": 0.027956046165424523 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.6559485530546624, "acc_stderr": 0.026981478043648043, "acc_norm": 0.6559485530546624, "acc_norm_stderr": 0.026981478043648043 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.6141975308641975, "acc_stderr": 0.027085401226132143, "acc_norm": 0.6141975308641975, "acc_norm_stderr": 0.027085401226132143 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.39361702127659576, "acc_stderr": 0.02914454478159615, "acc_norm": 0.39361702127659576, "acc_norm_stderr": 0.02914454478159615 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.39895697522816165, "acc_stderr": 0.012506757655293676, "acc_norm": 0.39895697522816165, "acc_norm_stderr": 0.012506757655293676 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5330882352941176, "acc_stderr": 0.030306257722468307, "acc_norm": 0.5330882352941176, "acc_norm_stderr": 0.030306257722468307 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.545751633986928, "acc_stderr": 0.020142974553795205, "acc_norm": 0.545751633986928, "acc_norm_stderr": 0.020142974553795205 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.6272727272727273, "acc_stderr": 0.04631381319425465, "acc_norm": 0.6272727272727273, "acc_norm_stderr": 0.04631381319425465 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.6326530612244898, "acc_stderr": 0.030862144921087555, "acc_norm": 0.6326530612244898, "acc_norm_stderr": 0.030862144921087555 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8159203980099502, "acc_stderr": 0.02740385941078684, "acc_norm": 0.8159203980099502, "acc_norm_stderr": 0.02740385941078684 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.83, "acc_stderr": 0.0377525168068637, "acc_norm": 0.83, "acc_norm_stderr": 0.0377525168068637 },
    "harness|hendrycksTest-virology|5": { "acc": 0.45180722891566266, "acc_stderr": 0.038743715565879536, "acc_norm": 0.45180722891566266, "acc_norm_stderr": 0.038743715565879536 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.7719298245614035, "acc_stderr": 0.032180937956023566, "acc_norm": 0.7719298245614035, "acc_norm_stderr": 0.032180937956023566 },
    "harness|truthfulqa:mc|0": { "mc1": 0.48714810281517745, "mc1_stderr": 0.017497717944299825, "mc2": 0.6565799618870065, "mc2_stderr": 0.01582378775792379 },
    "harness|winogrande|5": { "acc": 0.7182320441988951, "acc_stderr": 0.012643326011852944 },
    "harness|gsm8k|5": { "acc": 0.16755117513267628, "acc_stderr": 0.010287143693711216 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset.
-->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
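As noted at the top of this card, each evaluation run is stored as a split named after the run timestamp (e.g. `2024_01_08T05_11_37.271243`), with the `latest` split always resolving to the most recent run. Because those names are zero-padded and ordered year-first, the newest run can be picked with a plain lexicographic comparison. A minimal sketch of that resolution step (the `latest_split` helper and the earlier timestamp are illustrative, not part of the `datasets` API):

```python
# Resolve the most recent timestamped split name.
# Names like "2024_01_08T05_11_37.271243" are zero-padded and
# year-first, so the lexicographic maximum is also the newest run.
def latest_split(split_names):
    return max(split_names)

splits = [
    "2024_01_07T17_00_00.000000",  # illustrative earlier run
    "2024_01_08T05_11_37.271243",  # the run reported in this card
]
print(latest_split(splits))  # -> 2024_01_08T05_11_33... no: 2024_01_08T05_11_37.271243
```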
open-llm-leaderboard/details_s3nh__Noromaid-Aeryth-7B
[ "region:us" ]
2024-01-08T11:06:20+00:00
{"pretty_name": "Evaluation run of s3nh/Noromaid-Aeryth-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [s3nh/Noromaid-Aeryth-7B](https://huggingface.co/s3nh/Noromaid-Aeryth-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_s3nh__Noromaid-Aeryth-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-08T11:15:33.094144](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Noromaid-Aeryth-7B/blob/main/results_2024-01-08T11-15-33.094144.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5686288233051527,\n \"acc_stderr\": 0.03387968821236966,\n \"acc_norm\": 0.5764553734191823,\n \"acc_norm_stderr\": 0.03462463060484352,\n \"mc1\": 0.48714810281517745,\n \"mc1_stderr\": 0.017497717944299825,\n \"mc2\": 0.6565799618870065,\n \"mc2_stderr\": 0.01582378775792379\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5392491467576792,\n \"acc_stderr\": 0.014566303676636584,\n \"acc_norm\": 0.5674061433447098,\n \"acc_norm_stderr\": 0.014478005694182523\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6040629356701852,\n \"acc_stderr\": 0.00488051543132316,\n \"acc_norm\": 0.786197968532165,\n \"acc_norm_stderr\": 0.004091507850559578\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5921052631578947,\n \"acc_stderr\": 0.039993097127774734,\n \"acc_norm\": 0.5921052631578947,\n \"acc_norm_stderr\": 0.039993097127774734\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6226415094339622,\n \"acc_stderr\": 0.02983280811479601,\n \"acc_norm\": 0.6226415094339622,\n \"acc_norm_stderr\": 0.02983280811479601\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.04048439222695598,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.04048439222695598\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 
0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5549132947976878,\n \"acc_stderr\": 0.03789401760283647,\n \"acc_norm\": 0.5549132947976878,\n \"acc_norm_stderr\": 0.03789401760283647\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.03267862331014063,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.03267862331014063\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37037037037037035,\n \"acc_stderr\": 0.024870815251057075,\n \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.024870815251057075\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.04360314860077459,\n 
\"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6806451612903226,\n \"acc_stderr\": 0.026522709674667765,\n \"acc_norm\": 0.6806451612903226,\n \"acc_norm_stderr\": 0.026522709674667765\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.45320197044334976,\n \"acc_stderr\": 0.03502544650845872,\n \"acc_norm\": 0.45320197044334976,\n \"acc_norm_stderr\": 0.03502544650845872\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.03756335775187897,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.03756335775187897\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03053289223393202,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03053289223393202\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8031088082901554,\n \"acc_stderr\": 0.02869787397186067,\n \"acc_norm\": 0.8031088082901554,\n \"acc_norm_stderr\": 0.02869787397186067\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5384615384615384,\n \"acc_stderr\": 0.025275892070240644,\n \"acc_norm\": 0.5384615384615384,\n \"acc_norm_stderr\": 0.025275892070240644\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": 
{\n \"acc\": 0.5756302521008403,\n \"acc_stderr\": 0.032104790510157764,\n \"acc_norm\": 0.5756302521008403,\n \"acc_norm_stderr\": 0.032104790510157764\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7467889908256881,\n \"acc_stderr\": 0.01864407304137504,\n \"acc_norm\": 0.7467889908256881,\n \"acc_norm_stderr\": 0.01864407304137504\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4398148148148148,\n \"acc_stderr\": 0.03385177976044811,\n \"acc_norm\": 0.4398148148148148,\n \"acc_norm_stderr\": 0.03385177976044811\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7107843137254902,\n \"acc_stderr\": 0.031822318676475544,\n \"acc_norm\": 0.7107843137254902,\n \"acc_norm_stderr\": 0.031822318676475544\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.0306858205966108,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.0306858205966108\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n \"acc_stderr\": 0.032521134899291884,\n \"acc_norm\": 0.6233183856502242,\n \"acc_norm_stderr\": 0.032521134899291884\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.71900826446281,\n \"acc_stderr\": 0.04103203830514512,\n \"acc_norm\": 0.71900826446281,\n \"acc_norm_stderr\": 0.04103203830514512\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.04643454608906275,\n \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 
0.04643454608906275\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6748466257668712,\n \"acc_stderr\": 0.036803503712864616,\n \"acc_norm\": 0.6748466257668712,\n \"acc_norm_stderr\": 0.036803503712864616\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260595,\n \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260595\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n \"acc_stderr\": 0.02416161812798774,\n \"acc_norm\": 0.8376068376068376,\n \"acc_norm_stderr\": 0.02416161812798774\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7254150702426565,\n \"acc_stderr\": 0.015959829933084046,\n \"acc_norm\": 0.7254150702426565,\n \"acc_norm_stderr\": 0.015959829933084046\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5924855491329479,\n \"acc_stderr\": 0.026454578146931505,\n \"acc_norm\": 0.5924855491329479,\n \"acc_norm_stderr\": 0.026454578146931505\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.37318435754189944,\n \"acc_stderr\": 0.016175692013381943,\n \"acc_norm\": 0.37318435754189944,\n \"acc_norm_stderr\": 0.016175692013381943\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6078431372549019,\n \"acc_stderr\": 0.027956046165424523,\n \"acc_norm\": 0.6078431372549019,\n \"acc_norm_stderr\": 0.027956046165424523\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n \"acc_stderr\": 0.026981478043648043,\n \"acc_norm\": 0.6559485530546624,\n \"acc_norm_stderr\": 
0.026981478043648043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6141975308641975,\n \"acc_stderr\": 0.027085401226132143,\n \"acc_norm\": 0.6141975308641975,\n \"acc_norm_stderr\": 0.027085401226132143\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.39361702127659576,\n \"acc_stderr\": 0.02914454478159615,\n \"acc_norm\": 0.39361702127659576,\n \"acc_norm_stderr\": 0.02914454478159615\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.39895697522816165,\n \"acc_stderr\": 0.012506757655293676,\n \"acc_norm\": 0.39895697522816165,\n \"acc_norm_stderr\": 0.012506757655293676\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5330882352941176,\n \"acc_stderr\": 0.030306257722468307,\n \"acc_norm\": 0.5330882352941176,\n \"acc_norm_stderr\": 0.030306257722468307\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.545751633986928,\n \"acc_stderr\": 0.020142974553795205,\n \"acc_norm\": 0.545751633986928,\n \"acc_norm_stderr\": 0.020142974553795205\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n \"acc_stderr\": 0.04631381319425465,\n \"acc_norm\": 0.6272727272727273,\n \"acc_norm_stderr\": 0.04631381319425465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6326530612244898,\n \"acc_stderr\": 0.030862144921087555,\n \"acc_norm\": 0.6326530612244898,\n \"acc_norm_stderr\": 0.030862144921087555\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n \"acc_stderr\": 0.02740385941078684,\n \"acc_norm\": 0.8159203980099502,\n \"acc_norm_stderr\": 0.02740385941078684\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.45180722891566266,\n \"acc_stderr\": 0.038743715565879536,\n \"acc_norm\": 
0.45180722891566266,\n \"acc_norm_stderr\": 0.038743715565879536\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.48714810281517745,\n \"mc1_stderr\": 0.017497717944299825,\n \"mc2\": 0.6565799618870065,\n \"mc2_stderr\": 0.01582378775792379\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7182320441988951,\n \"acc_stderr\": 0.012643326011852944\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16755117513267628,\n \"acc_stderr\": 0.010287143693711216\n }\n}\n```", "repo_url": "https://huggingface.co/s3nh/Noromaid-Aeryth-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|arc:challenge|25_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|arc:challenge|25_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|gsm8k|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|gsm8k|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hellaswag|10_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hellaswag|10_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T11-03-59.898872.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T11-03-59.898872.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T11-03-59.898872.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T11-15-33.094144.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T11-15-33.094144.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T11-15-33.094144.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T11-15-33.094144.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T11-15-33.094144.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T11-15-33.094144.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-08T11-15-33.094144.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": 
[{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T11-15-33.094144.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": 
"2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", 
"path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["**/details_harness|winogrande|5_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["**/details_harness|winogrande|5_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-08T11-15-33.094144.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_08T11_03_59.898872", "path": ["results_2024-01-08T11-03-59.898872.parquet"]}, {"split": "2024_01_08T11_15_33.094144", "path": ["results_2024-01-08T11-15-33.094144.parquet"]}, {"split": "latest", "path": ["results_2024-01-08T11-15-33.094144.parquet"]}]}]}
2024-01-08T11:17:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of s3nh/Noromaid-Aeryth-7B Dataset automatically created during the evaluation run of model s3nh/Noromaid-Aeryth-7B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-08T11:15:33.094144 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
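The configs above name each run's split after its timestamp, with a "latest" alias resolving to the most recent run. A minimal, self-contained sketch of that naming convention follows; the helper names are illustrative, not part of any leaderboard tooling:

```python
def timestamp_to_split_name(run_timestamp: str) -> str:
    """Map an ISO run timestamp to its split name, as in the configs above:
    '2024-01-08T11:15:33.094144' -> '2024_01_08T11_15_33.094144'."""
    return run_timestamp.replace("-", "_").replace(":", "_")

def resolve_latest(split_names):
    """The 'latest' alias points at the most recent run; the underscore
    timestamps are fixed-width, so the lexicographic max is the newest."""
    return max(name for name in split_names if name != "latest")

splits = ["2024_01_08T11_03_59.898872", "2024_01_08T11_15_33.094144", "latest"]
print(timestamp_to_split_name("2024-01-08T11:15:33.094144"))
print(resolve_latest(splits))
```

This matches the config entries above, where the "latest" split's path duplicates that of the 2024-01-08T11:15:33 run.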
[ "# Dataset Card for Evaluation run of s3nh/Noromaid-Aeryth-7B\n\n\n\nDataset automatically created during the evaluation run of model s3nh/Noromaid-Aeryth-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-08T11:15:33.094144(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of s3nh/Noromaid-Aeryth-7B\n\n\n\nDataset automatically created during the evaluation run of model s3nh/Noromaid-Aeryth-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-08T11:15:33.094144(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 187, 68, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of s3nh/Noromaid-Aeryth-7B\n\n\n\nDataset automatically created during the evaluation run of model s3nh/Noromaid-Aeryth-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-08T11:15:33.094144(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
d0e40f5b476085d54719a3d03ab7db66ed9749fb
- [alpaca_eval](https://huggingface.co/datasets/tatsu-lab/alpaca_eval) 데이터의 instruction을 GPT-4를 이용해서 번역, 이후 GPT-4를 이용해서 대답 생성 - 아직 데이터 검수중!!!!
heegyu/alpaca_eval_ko
[ "license:apache-2.0", "region:us" ]
2024-01-08T11:06:51+00:00
{"license": "apache-2.0"}
2024-01-09T12:55:02+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
- alpaca_eval 데이터의 instruction을 GPT-4를 이용해서 번역, 이후 GPT-4를 이용해서 대답 생성 - 아직 데이터 검수중!!!!
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
b9a0b634f9641e255691355918c3859ea02b1009
<div align="center"> <h1> TeleChat预训练数据集(TeleChat-PTD) </h1> </div> <p align="center"> 🤗 <a href="https://huggingface.co/Tele-AI/Telechat-7B" target="_blank">Hugging Face</a> • 🏔 <a href="" target="_blank">MindSpore</a>️ • 🦉 <a href="https://github.com/Tele-AI/Telechat" target="_blank">github</a>️ • 🐾 <a href="https://gitee.com/Tele-AI/tele-chat" target="_blank">gitee</a>️ • 💬 <a href="https://github.com/Tele-AI/Telechat/blob/master/images/wechat.jpg" target="_blank">WeChat</a> </p> <p align="center"> <a href="https://arxiv.org/abs/2401.03804" target="_blank"> Tech Report </a> </p> # 数据介绍 TeleChat-PTD 是由电信星辰大模型**TeleChat**预训练语料中抽取出的综合性大规模中文数据集。数据主要来源于网页、书籍、官方媒体等。 我们使用规则+模型的方式进行了相关的过滤,并对数据进行了相似性去重,尽可能地提取出高质量的数据。 TeleChat-PTD 数据集大约公开了2.7亿条数据,数据由纯中文文本构成,原始大小约1TB,压缩后480G,共189个文件。数据集中已经去除了其它冗余信息。 # 数据下载 huggingface下载地址:TODO 天翼云盘下载地址:TODO # 数据格式 数据为jsonl格式,仅有一个字段data: 单条处理后的预训练数据 # 数据清洗 数据清洗的工作流程主要是:规则筛选和清洗、去重、高质量数据筛选、数据安全处理这四个步骤。 - 规则筛选主要是一些通用的规则和启发式规则,例如对字数长度的筛选等等。 - 去重主要使用相似度去重来将过于相似重复的数据删除 - 高质量筛选主要使用了BERT、GPT2等模型对数据进行打分筛选出高质量数据 - 数据清洗主要是针对不良数据进行了识别和去除。 # 声明、协议、引用 ### 声明 我们在此声明,不要使用TeleChat模型及其衍生模型进行任何危害国家社会安全或违法的活动。同时,我们也要求使用者不要将TeleChat模型用于没有安全审查和备案的互联网服务。我们希望所有使用者遵守上述原则,确保科技发展在合法合规的环境下进行。 我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用TeleChat开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 ### 协议 社区使用 TeleChat 模型需要遵循《[TeleChat模型社区许可协议](./TeleChat模型社区许可协议.pdf)》。TeleChat模型支持商业用途,如果您计划将 TeleChat 模型或其衍生品用于商业目的,您需要通过以下联系邮箱 [email protected],提交《TeleChat模型社区许可协议》要求的申请材料。审核通过后,将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。 ### 引用 如需引用我们的工作,请使用如下 reference: ``` @misc{wang2024telechat, title={TeleChat Technical Report}, author={Zihan Wang and Xinzhang Liu and Shixuan Liu and Yitong Yao and Yuyao Huang and Zhongjiang He and Xuelong Li and Yongxiang Li and Zhonghao Che and Zhaoxi Zhang and Yan Wang and Xin Wang and Luwen Pu and Huihan Xu and Ruiyu Fang and Yu Zhao and Jie Zhang and Xiaomeng Huang and Zhilong Lu and
Jiaxin Peng and Wenjun Zheng and Shiquan Wang and Bingkai Yang and Xuewei he and Zhuoru Jiang and Qiyi Xie and Yanhan Zhang and Zhongqiu Li and Lingling Shi and Weiwei Fu and Yin Zhang and Zilu Huang and Sishi Xiong and Yuxiang Zhang and Chao Wang and Shuangyong Song}, year={2024}, eprint={2401.03804}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
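As described in the 数据格式 section of the card above, the corpus is JSON Lines with a single `data` field per record. A minimal, self-contained sketch of parsing such a file; the sample records are illustrative, not drawn from the corpus:

```python
import json

# Two illustrative records in the documented format: one "data" field each.
sample_jsonl = "\n".join([
    json.dumps({"data": "第一条预训练文本"}, ensure_ascii=False),
    json.dumps({"data": "第二条预训练文本"}, ensure_ascii=False),
])

def iter_texts(lines):
    """Yield the 'data' payload of each non-empty JSON Lines record."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)["data"]

texts = list(iter_texts(sample_jsonl.splitlines()))
print(texts)  # ['第一条预训练文本', '第二条预训练文本']
```

The same loop would apply unchanged to any of the 189 compressed corpus files once decompressed.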
Tele-AI/TeleChat-PTD
[ "license:apache-2.0", "arxiv:2401.03804", "region:us" ]
2024-01-08T11:17:58+00:00
{"license": "apache-2.0", "viewer": false}
2024-01-13T04:51:01+00:00
[ "2401.03804" ]
[]
TAGS #license-apache-2.0 #arxiv-2401.03804 #region-us
<div align="center"> <h1> TeleChat预训练数据集(TeleChat-PTD) </h1> </div> <p align="center"> <a href="URL target="_blank">Hugging Face</a> • <a href="" target="_blank">MindSpore</a>️ • <a href="URL target="_blank">github</a>️ • <a href="URL target="_blank">gitee</a>️ • <a href="URL target="_blank">WeChat</a> </p> <p align="center"> <a href="URL target="_blank"> Tech Report </a> </p> # 数据介绍 TeleChat-PTD 是由电信星辰大模型TeleChat预训练语料中抽取出的的综合性大规模中文数据集。数据主要来源于网页、书籍、官方媒体等。 我们使用规则+模型的方式进行了相关的过滤,并对数据进行了相似性去重,尽可能地提取出高质量地数据。 TeleChat-PTD 数据集大约公开了2.7亿条数据,数据由纯中文文本构成,原始大小约1TB,压缩后480G,共189个文件。数据集中已经去除了其它冗余信息。 # 数据下载 huggingface下载地址:TODO 天翼云盘下载地址:TODO # 数据格式 数据为jsonl格式,仅有一个字段data: 单条处理后的预训练数据 # 数据清洗 数据清洗的工作流程主要是:规则筛选和清洗、去重、高质量数据筛选、数据安全处理这四个步骤。 - 规则筛选主要是一些通用的规则和启发式规则,例如对字数长度的筛选等等。 - 去重主要使用相似度去重来将过于相似重复的数据删除 - 高质量筛选主要使用了BERT、GPT2等模型对数据进行打分筛选出高质量数据 - 数据清洗主要是针对不良数据进行了识别和去除。 # 声明、协议、引用 ### 声明 我们在此声明,不要使用TeleChat模型及其衍生模型进行任何危害国家社会安全或违法的活动。同时,我们也要求使用者不要将TeleChat模型用于没有安全审查和备案的互联网服务。我们希望所有使用者遵守上述原则,确保科技发展在合法合规的环境下进行。 我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用TeleChat开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 ### 协议 社区使用 TeleChat 模型需要遵循《TeleChat模型社区许可协议》。TeleChat模型支持商业用途,如果您计划将 TeleChat 模型或其衍生品用于商业目的,您需要通过以下联系邮箱 tele_ai@URL,提交《TeleChat模型社区许可协议》要求的申请材料。审核通过后,将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。 ### 引用 如需引用我们的工作,请使用如下 reference:
[ "# 数据介绍\nTeleChat-PTD 是由电信星辰大模型TeleChat预训练语料中抽取出的的综合性大规模中文数据集。数据主要来源于网页、书籍、官方媒体等。 我们使用规则+模型的方式进行了相关的过滤,并对数据进行了相似性去重,尽可能地提取出高质量地数据。\n\nTeleChat-PTD 数据集大约公开了2.7亿条数据,数据由纯中文文本构成,原始大小约1TB,压缩后480G,共189个文件。数据集中已经去除了其它冗余信息。", "# 数据下载\n\nhuggingface下载地址:TODO\n\n天翼云盘下载地址:TODO", "# 数据格式\n数据为jsonl格式,仅有一个字段data: 单条处理后的预训练数据", "# 数据清洗\n数据清洗的工作流程主要是:规则筛选和清洗、去重、高质量数据筛选、数据安全处理这四个步骤。\n\n- 规则筛选主要是一些通用的规则和启发式规则,例如对字数长度的筛选等等。\n- 去重主要使用相似度去重来将过于相似重复的数据删除\n- 高质量筛选主要使用了BERT、GPT2等模型对数据进行打分筛选出高质量数据\n- 数据清洗主要是针对不良数据进行了识别和去除。", "# 声明、协议、引用", "### 声明\n我们在此声明,不要使用TeleChat模型及其衍生模型进行任何危害国家社会安全或违法的活动。同时,我们也要求使用者不要将TeleChat模型用于没有安全审查和备案的互联网服务。我们希望所有使用者遵守上述原则,确保科技发展在合法合规的环境下进行。\n\n我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用TeleChat开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。", "### 协议\n社区使用 TeleChat 模型需要遵循《TeleChat模型社区许可协议》。TeleChat模型支持商业用途,如果您计划将 TeleChat 模型或其衍生品用于商业目的,您需要通过以下联系邮箱 tele_ai@URL,提交《TeleChat模型社区许可协议》要求的申请材料。审核通过后,将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。", "### 引用\n如需引用我们的工作,请使用如下 reference:" ]
[ "TAGS\n#license-apache-2.0 #arxiv-2401.03804 #region-us \n", "# 数据介绍\nTeleChat-PTD 是由电信星辰大模型TeleChat预训练语料中抽取出的的综合性大规模中文数据集。数据主要来源于网页、书籍、官方媒体等。 我们使用规则+模型的方式进行了相关的过滤,并对数据进行了相似性去重,尽可能地提取出高质量地数据。\n\nTeleChat-PTD 数据集大约公开了2.7亿条数据,数据由纯中文文本构成,原始大小约1TB,压缩后480G,共189个文件。数据集中已经去除了其它冗余信息。", "# 数据下载\n\nhuggingface下载地址:TODO\n\n天翼云盘下载地址:TODO", "# 数据格式\n数据为jsonl格式,仅有一个字段data: 单条处理后的预训练数据", "# 数据清洗\n数据清洗的工作流程主要是:规则筛选和清洗、去重、高质量数据筛选、数据安全处理这四个步骤。\n\n- 规则筛选主要是一些通用的规则和启发式规则,例如对字数长度的筛选等等。\n- 去重主要使用相似度去重来将过于相似重复的数据删除\n- 高质量筛选主要使用了BERT、GPT2等模型对数据进行打分筛选出高质量数据\n- 数据清洗主要是针对不良数据进行了识别和去除。", "# 声明、协议、引用", "### 声明\n我们在此声明,不要使用TeleChat模型及其衍生模型进行任何危害国家社会安全或违法的活动。同时,我们也要求使用者不要将TeleChat模型用于没有安全审查和备案的互联网服务。我们希望所有使用者遵守上述原则,确保科技发展在合法合规的环境下进行。\n\n我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用TeleChat开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。", "### 协议\n社区使用 TeleChat 模型需要遵循《TeleChat模型社区许可协议》。TeleChat模型支持商业用途,如果您计划将 TeleChat 模型或其衍生品用于商业目的,您需要通过以下联系邮箱 tele_ai@URL,提交《TeleChat模型社区许可协议》要求的申请材料。审核通过后,将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。", "### 引用\n如需引用我们的工作,请使用如下 reference:" ]
[ 23, 124, 22, 26, 119, 7, 167, 103, 16 ]
[ "passage: TAGS\n#license-apache-2.0 #arxiv-2401.03804 #region-us \n# 数据介绍\nTeleChat-PTD 是由电信星辰大模型TeleChat预训练语料中抽取出的的综合性大规模中文数据集。数据主要来源于网页、书籍、官方媒体等。 我们使用规则+模型的方式进行了相关的过滤,并对数据进行了相似性去重,尽可能地提取出高质量地数据。\n\nTeleChat-PTD 数据集大约公开了2.7亿条数据,数据由纯中文文本构成,原始大小约1TB,压缩后480G,共189个文件。数据集中已经去除了其它冗余信息。# 数据下载\n\nhuggingface下载地址:TODO\n\n天翼云盘下载地址:TODO# 数据格式\n数据为jsonl格式,仅有一个字段data: 单条处理后的预训练数据# 数据清洗\n数据清洗的工作流程主要是:规则筛选和清洗、去重、高质量数据筛选、数据安全处理这四个步骤。\n\n- 规则筛选主要是一些通用的规则和启发式规则,例如对字数长度的筛选等等。\n- 去重主要使用相似度去重来将过于相似重复的数据删除\n- 高质量筛选主要使用了BERT、GPT2等模型对数据进行打分筛选出高质量数据\n- 数据清洗主要是针对不良数据进行了识别和去除。# 声明、协议、引用### 声明\n我们在此声明,不要使用TeleChat模型及其衍生模型进行任何危害国家社会安全或违法的活动。同时,我们也要求使用者不要将TeleChat模型用于没有安全审查和备案的互联网服务。我们希望所有使用者遵守上述原则,确保科技发展在合法合规的环境下进行。\n\n我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用TeleChat开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。" ]
5db86c336bf8b92420287b275415cfb565c2937a
# Answer Equivalence Dataset This dataset is introduced and described in [Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation](http://arxiv.org/abs/2202.07654). ## Source This is a repost. The original dataset repository [can be found here.](https://github.com/google-research-datasets/answer-equivalence-dataset/tree/main) ## Data splits and sizes | AE Split | # AE Examples | # Ratings | |-----------|---------------|-----------| | Train | 9,090 | 9,090 | | Dev | 2,734 | 4,446 | | Test | 5,831 | 9,724 | | Total | 17,655 | 23,260 | | Split by system | # AE Examples | # Ratings | |------------------|---------------|-----------| | BiDAF dev predictions | 5622 | 7522 | | XLNet dev predictions | 2448 | 7932 | | Luke dev predictions | 2240 | 4590 | | Total | 8,565 | 14,170 | ## BERT Matching (BEM) model The BEM model from the paper, finetuned on this dataset, is available on [tfhub](https://tfhub.dev/google/answer_equivalence/bem/1). This [colab](https://colab.research.google.com/github/google-research-datasets/answer-equivalence-dataset/blob/main/Answer_Equivalence_BEM_example.ipynb) demonstrates how to use it. ## How to cite AE? ``` @article{bulian-etal-2022-tomayto, author = {Jannis Bulian and Christian Buck and Wojciech Gajewski and Benjamin B{\"o}rschinger and Tal Schuster}, title = {Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation}, journal = {CoRR}, volume = {abs/2202.07654}, year = {2022}, ee = {http://arxiv.org/abs/2202.07654}, } ``` ## Disclaimer This is not an official Google product. ## Contact information For help or issues, please submit [a GitHub issue to this repository](https://github.com/google-research-datasets/answer-equivalence-dataset/tree/main) or contact the authors by email.
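As a quick sanity check, the per-split counts in the AE split table above are internally consistent; a small self-contained sketch:

```python
# (AE examples, ratings) per split, copied from the table above.
ae_splits = {
    "train": (9_090, 9_090),
    "dev":   (2_734, 4_446),
    "test":  (5_831, 9_724),
}

total_examples = sum(examples for examples, _ in ae_splits.values())
total_ratings = sum(ratings for _, ratings in ae_splits.values())
print(total_examples, total_ratings)  # 17655 23260
```

which matches the stated totals of 17,655 examples and 23,260 ratings.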
kortukov/answer-equivalence-dataset
[ "task_categories:text-classification", "size_categories:1K<n<10K", "license:apache-2.0", "arxiv:2202.07654", "region:us" ]
2024-01-08T11:21:27+00:00
{"license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.jsonl.zip"}, {"split": "test", "path": "ae_test.jsonl.zip"}, {"split": "dev", "path": "ae_dev.jsonl.zip"}, {"split": "dev_bidaf", "path": "dev_bidaf.jsonl.zip"}, {"split": "dev_xlnet", "path": "dev_xlnet.jsonl.zip"}, {"split": "dev_luke", "path": "dev_luke.jsonl.zip"}]}]}
2024-01-08T11:50:51+00:00
[ "2202.07654" ]
[]
TAGS #task_categories-text-classification #size_categories-1K<n<10K #license-apache-2.0 #arxiv-2202.07654 #region-us
Answer Equivalence Dataset ========================== This dataset is introduced and described in Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation. Source ------ This is a repost. The original dataset repository can be found here. Data splits and sizes --------------------- AE Split: Train, # AE Examples: 9,090, # Ratings: 9,090 AE Split: Dev, # AE Examples: 2,734, # Ratings: 4,446 AE Split: Test, # AE Examples: 5,831, # Ratings: 9,724 AE Split: Total, # AE Examples: 17,655, # Ratings: 23,260 Split by system: BiDAF dev predictions, # AE Examples: 5622, # Ratings: 7522 Split by system: XLNet dev predictions, # AE Examples: 2448, # Ratings: 7932 Split by system: Luke dev predictions, # AE Examples: 2240, # Ratings: 4590 Split by system: Total, # AE Examples: 8,565, # Ratings: 14,170 BERT Matching (BEM) model ------------------------- The BEM model from the paper, finetuned on this dataset, is available on tfhub. This colab demonstrates how to use it. How to cite AE? --------------- Disclaimer ---------- This is not an official Google product. Contact information ------------------- For help or issues, please submit a GitHub issue to this repostory or contact the authors by email.
[ "# AE Examples: 9,090, # Ratings: 9,090\nAE Split: Dev, # AE Examples: 2,734, # Ratings: 4,446\nAE Split: Test, # AE Examples: 5,831, # Ratings: 9,724\nAE Split: Total, # AE Examples: 17,655, # Ratings: 23,260\n\n\nSplit by system: BiDAF dev predictions, # AE Examples: 5622, # Ratings: 7522\nSplit by system: XLNet dev predictions, # AE Examples: 2448, # Ratings: 7932\nSplit by system: Luke dev predictions, # AE Examples: 2240, # Ratings: 4590\nSplit by system: Total, # AE Examples: 8,565, # Ratings: 14,170\n\n\nBERT Matching (BEM) model\n-------------------------\n\n\nThe BEM model from the paper, finetuned on this dataset, is available on tfhub.\n\n\nThis colab demonstrates how to use it.\n\n\nHow to cite AE?\n---------------\n\n\nDisclaimer\n----------\n\n\nThis is not an official Google product.\n\n\nContact information\n-------------------\n\n\nFor help or issues, please submit a GitHub issue to this repostory or contact the authors by email." ]
[ "TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #license-apache-2.0 #arxiv-2202.07654 #region-us \n", "# AE Examples: 9,090, # Ratings: 9,090\nAE Split: Dev, # AE Examples: 2,734, # Ratings: 4,446\nAE Split: Test, # AE Examples: 5,831, # Ratings: 9,724\nAE Split: Total, # AE Examples: 17,655, # Ratings: 23,260\n\n\nSplit by system: BiDAF dev predictions, # AE Examples: 5622, # Ratings: 7522\nSplit by system: XLNet dev predictions, # AE Examples: 2448, # Ratings: 7932\nSplit by system: Luke dev predictions, # AE Examples: 2240, # Ratings: 4590\nSplit by system: Total, # AE Examples: 8,565, # Ratings: 14,170\n\n\nBERT Matching (BEM) model\n-------------------------\n\n\nThe BEM model from the paper, finetuned on this dataset, is available on tfhub.\n\n\nThis colab demonstrates how to use it.\n\n\nHow to cite AE?\n---------------\n\n\nDisclaimer\n----------\n\n\nThis is not an official Google product.\n\n\nContact information\n-------------------\n\n\nFor help or issues, please submit a GitHub issue to this repostory or contact the authors by email." ]
[ 46, 262 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #license-apache-2.0 #arxiv-2202.07654 #region-us \n# AE Examples: 9,090, # Ratings: 9,090\nAE Split: Dev, # AE Examples: 2,734, # Ratings: 4,446\nAE Split: Test, # AE Examples: 5,831, # Ratings: 9,724\nAE Split: Total, # AE Examples: 17,655, # Ratings: 23,260\n\n\nSplit by system: BiDAF dev predictions, # AE Examples: 5622, # Ratings: 7522\nSplit by system: XLNet dev predictions, # AE Examples: 2448, # Ratings: 7932\nSplit by system: Luke dev predictions, # AE Examples: 2240, # Ratings: 4590\nSplit by system: Total, # AE Examples: 8,565, # Ratings: 14,170\n\n\nBERT Matching (BEM) model\n-------------------------\n\n\nThe BEM model from the paper, finetuned on this dataset, is available on tfhub.\n\n\nThis colab demonstrates how to use it.\n\n\nHow to cite AE?\n---------------\n\n\nDisclaimer\n----------\n\n\nThis is not an official Google product.\n\n\nContact information\n-------------------\n\n\nFor help or issues, please submit a GitHub issue to this repostory or contact the authors by email." ]
3ace8456cddfcbc8c43c9a6e5ff3c381e1c2f368
- original dataset: [heegyu/glaive-function-calling-v2-formatted](https://huggingface.co/datasets/heegyu/glaive-function-calling-v2-formatted) --- Датасет heegyu/glaive-function-calling-v2-formatted переведенный с помощью SeamlessM4T
VLegio/ru_glaive-function-calling-v2-formatted
[ "language:ru", "func_call", "machine_translate", "region:us" ]
2024-01-08T11:39:45+00:00
{"language": ["ru"], "pretty_name": "Russian machine translated function calling dataset", "tags": ["func_call", "machine_translate"]}
2024-01-08T13:09:56+00:00
[]
[ "ru" ]
TAGS #language-Russian #func_call #machine_translate #region-us
- original dataset: heegyu/glaive-function-calling-v2-formatted --- Датасет heegyu/glaive-function-calling-v2-formatted переведенный с помощбю seamlessMT4
[]
[ "TAGS\n#language-Russian #func_call #machine_translate #region-us \n" ]
[ 21 ]
[ "passage: TAGS\n#language-Russian #func_call #machine_translate #region-us \n" ]
9e8a688b72fbc24220403bcebdbbfc72e0d7e033
# LLM Alignment Dataset ## Table of Contents - [Overview](#overview) - [Dataset Description](#dataset-description) - [Methodology](#methodology) - [Usage](#usage) ## Overview This dataset is the result of an exercise conducted during the Hugging Face LLM Alignment Internship. It involves the use of rejection sampling to select high-quality responses from a language model for a set of prompts. The exercise aims to explore model alignment and response quality improvement techniques. ## Dataset Description ### Included Models - **Phi-2 Language Model:** A 2.7 billion parameter model trained on synthetic and web data. [link](https://huggingface.co/microsoft/phi-2) - **OpenAssistant's Reward Model:** A fine-tuned DeBERTa V3 variant trained to score text quality. [link](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2) ### Data Collection The dataset comprises responses to 10 prompts, focusing on the model's instruction-following ability in various domains. ### Dataset Structure - `prompt`: The original instruction or question posed to the model. - `response`: The best response out of 8 candidate responses - `score`: Corresponding score for the response, evaluated by the OpenAssistant reward model. ## Methodology ### Prompt Generation Prompts were crafted to test the model's ability to follow instructions across diverse domains. The prompt generation was done via ChatGPT. ### Response Generation Each prompt was fed into the Phi-2 model to generate 8 responses. ### Scoring Responses were scored using the OpenAssistant reward model to identify the best response per prompt. ## Usage This dataset is intended for research and educational purposes, particularly in exploring techniques for improving language model alignment and response quality.
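The methodology above (sample 8 candidate responses per prompt, score each with a reward model, keep the highest-scoring one) is classic best-of-n rejection sampling. A minimal sketch, where `generate` and `score` are dummy stand-ins rather than the real Phi-2 or OpenAssistant APIs:

```python
def best_of_n(prompt, generate, score, n=8):
    """Rejection sampling: draw n candidates and keep the highest-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    scored = [(score(prompt, c), c) for c in candidates]
    best_score, best_response = max(scored)
    return best_response, best_score

# Dummy stand-ins: the "model" cycles through canned drafts and the
# "reward model" prefers answers that mention the prompt's key term.
drafts = iter([f"draft {i}: rejection sampling" if i == 5 else f"draft {i}: unrelated"
               for i in range(8)])
generate = lambda prompt: next(drafts)
score = lambda prompt, response: 1.0 if "rejection sampling" in response else 0.0

response, reward = best_of_n("Explain rejection sampling.", generate, score)
print(response, reward)  # draft 5: rejection sampling 1.0
```

In the exercise itself, `generate` would wrap Phi-2 sampling and `score` the reward model; only the winning (response, score) pair per prompt was kept in the dataset.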
baptistecolle/alignment_dataset
[ "language:en", "license:apache-2.0", "region:us" ]
2024-01-08T11:48:27+00:00
{"language": ["en"], "license": "apache-2.0", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 5911, "num_examples": 10}], "download_size": 9991, "dataset_size": 5911}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-08T11:57:54+00:00
[]
[ "en" ]
TAGS #language-English #license-apache-2.0 #region-us
# LLM Alignment Dataset ## Table of Contents - Overview - Dataset Description - Methodology - Usage ## Overview This dataset is the result of an exercise conducted during the Hugging Face LLM Alignment Internship. It involves the use of rejection sampling to select high-quality responses from a language model for a set of prompts. The exercise aims to explore model alignment and response quality improvement techniques. ## Dataset Description ### Included Models - Phi-2 Language Model: A 2.7 billion parameter model trained on synthetic and web data. link - OpenAssistant's Reward Model: A fine-tuned DeBERTa V3 variant trained to score text quality. link ### Data Collection The dataset comprises responses to 10 prompts, focusing on the model's instruction-following ability in various domains. ### Dataset Structure - 'prompt': The original instruction or question posed to the model. - 'response': The best response out of 8 candidate response - 'score': Corresponding score for the response, evaluated by the OpenAssistant reward model. ## Methodology ### Prompt Generation Prompts were crafted to test the model's ability to follow instructions across diverse domains. The prompt generation was done via chatGPT ### Response Generation Each prompt was fed into the Phi-2 model to generate 8 responses. ### Scoring Responses were scored using the OpenAssistant reward model to identify the best response per prompt. ## Usage This dataset is intended for research and educational purposes, particularly in exploring techniques for improving language model alignment and response quality.
[ "# LLM Alignment Dataset", "## Table of Contents\n- Overview\n- Dataset Description\n- Methodology\n- Usage", "## Overview\nThis dataset is the result of an exercise conducted during the Hugging Face LLM Alignment Internship. It involves the use of rejection sampling to select high-quality responses from a language model for a set of prompts. The exercise aims to explore model alignment and response quality improvement techniques.", "## Dataset Description", "### Included Models\n- Phi-2 Language Model: A 2.7 billion parameter model trained on synthetic and web data. link\n- OpenAssistant's Reward Model: A fine-tuned DeBERTa V3 variant trained to score text quality. link", "### Data Collection\nThe dataset comprises responses to 10 prompts, focusing on the model's instruction-following ability in various domains.", "### Dataset Structure\n- 'prompt': The original instruction or question posed to the model.\n- 'response': The best response out of 8 candidate response\n- 'score': Corresponding score for the response, evaluated by the OpenAssistant reward model.", "## Methodology", "### Prompt Generation\nPrompts were crafted to test the model's ability to follow instructions across diverse domains. The prompt generation was done via chatGPT", "### Response Generation\nEach prompt was fed into the Phi-2 model to generate 8 responses.", "### Scoring\nResponses were scored using the OpenAssistant reward model to identify the best response per prompt.", "## Usage\nThis dataset is intended for research and educational purposes, particularly in exploring techniques for improving language model alignment and response quality." ]
[ "TAGS\n#language-English #license-apache-2.0 #region-us \n", "# LLM Alignment Dataset", "## Table of Contents\n- Overview\n- Dataset Description\n- Methodology\n- Usage", "## Overview\nThis dataset is the result of an exercise conducted during the Hugging Face LLM Alignment Internship. It involves the use of rejection sampling to select high-quality responses from a language model for a set of prompts. The exercise aims to explore model alignment and response quality improvement techniques.", "## Dataset Description", "### Included Models\n- Phi-2 Language Model: A 2.7 billion parameter model trained on synthetic and web data. link\n- OpenAssistant's Reward Model: A fine-tuned DeBERTa V3 variant trained to score text quality. link", "### Data Collection\nThe dataset comprises responses to 10 prompts, focusing on the model's instruction-following ability in various domains.", "### Dataset Structure\n- 'prompt': The original instruction or question posed to the model.\n- 'response': The best response out of 8 candidate response\n- 'score': Corresponding score for the response, evaluated by the OpenAssistant reward model.", "## Methodology", "### Prompt Generation\nPrompts were crafted to test the model's ability to follow instructions across diverse domains. The prompt generation was done via chatGPT", "### Response Generation\nEach prompt was fed into the Phi-2 model to generate 8 responses.", "### Scoring\nResponses were scored using the OpenAssistant reward model to identify the best response per prompt.", "## Usage\nThis dataset is intended for research and educational purposes, particularly in exploring techniques for improving language model alignment and response quality." ]
[ 18, 9, 18, 74, 4, 60, 34, 65, 3, 36, 19, 25, 31 ]
[ "passage: TAGS\n#language-English #license-apache-2.0 #region-us \n# LLM Alignment Dataset## Table of Contents\n- Overview\n- Dataset Description\n- Methodology\n- Usage## Overview\nThis dataset is the result of an exercise conducted during the Hugging Face LLM Alignment Internship. It involves the use of rejection sampling to select high-quality responses from a language model for a set of prompts. The exercise aims to explore model alignment and response quality improvement techniques.## Dataset Description### Included Models\n- Phi-2 Language Model: A 2.7 billion parameter model trained on synthetic and web data. link\n- OpenAssistant's Reward Model: A fine-tuned DeBERTa V3 variant trained to score text quality. link### Data Collection\nThe dataset comprises responses to 10 prompts, focusing on the model's instruction-following ability in various domains.### Dataset Structure\n- 'prompt': The original instruction or question posed to the model.\n- 'response': The best response out of 8 candidate response\n- 'score': Corresponding score for the response, evaluated by the OpenAssistant reward model.## Methodology### Prompt Generation\nPrompts were crafted to test the model's ability to follow instructions across diverse domains. The prompt generation was done via chatGPT### Response Generation\nEach prompt was fed into the Phi-2 model to generate 8 responses.### Scoring\nResponses were scored using the OpenAssistant reward model to identify the best response per prompt.## Usage\nThis dataset is intended for research and educational purposes, particularly in exploring techniques for improving language model alignment and response quality." ]
f474dcd2ad9276dfb48f96670f830da694870447
<img src="fig/icon.png" alt="icon" style="zoom:20%;" />

#### Dataset Summary

DebugBench is a Large Language Model (LLM) debugging benchmark introduced in the paper [DebugBench: Evaluating Debugging Capability of Large Language Models](https://arxiv.org/abs/2401.04621). We collect code snippets from the [LeetCode](https://leetcode.com) community and implant bugs into source data with [GPT-4](https://openai.com/research/gpt-4). The project is also open-sourced as a [GitHub repository](https://github.com/thunlp/DebugBench).

- It consists of 4,253 instances.
- It covers four major bug categories and 18 minor types.
- It includes C++, Java, and Python instances.
- It contains three difficulty levels: easy, medium, and hard.
- All the instances were released after June 2022.
- Please refer to the [article](https://arxiv.org/abs/2401.04621) for more details.

#### Data Fields

An instance in DebugBench contains 13 features.

| Feature | Description | Example |
| :------------------: | ------------------------------------------------------------ | ------------------------------------------------------------ |
| slug | The id of the LeetCode programming problem. | single-number |
| category | The category of the bug taxonomy. | logic error |
| subtype | The subtype of the bug taxonomy under the major category. | operation error |
| language | The programming language of the instance. | cpp |
| level | The difficulty level of the problem: "easy", "medium", or "hard". | easy |
| release_time | Release time of the corresponding programming problem, as a Unix timestamp. | 1,691,549,090 |
| question | The text description of the programming problem. | Given a non-empty array of integers nums, every element appears twice except for one. Find that single one. You must implement a solution with a linear runtime complexity and use only constant extra space. |
| examples | Some examples of input-output pairs for the targeted function. | [ "Input: nums = [2,2,1]\nOutput: 1", "Input: nums = [4,1,2,1,2]\nOutput: 4", "Input: nums = [1]\nOutput: 1" ] |
| constraints | The constraints on the input parameters. | 1 <= nums.length <= 3 * 10^4; -3 * 10^4 <= nums[i] <= 3 * 10^4; each element in the array appears twice except for one element which appears only once. |
| solution | A ground-truth solution that passes all the test suites for the programming problem. | class Solution { public: int singleNumber(vector<int>& nums) { unordered_map<int,int> mp; for(int i=0;i<nums.size();i++){ mp[nums[i]]++; } for(auto m:mp){ if(m.second==1){ return m.first; } } return -1; } }; |
| solution explanation | The original posts that share the solution. | \# Using Map\n```\nclass Solution {\npublic:\n int singleNumber(vector<int>& nums) {\n unordered_map<int,int> mp;\n for(int i=0;i<nums.size();i++){\n mp[nums[i]]++;\n }\n for(auto m:mp){\n if(m.second==1){\n return m.first;\n }\n }\n return -1;\n }\n};\n```\n# Using XOR\n```\nclass Solution {\npublic:\n int singleNumber(vector<int>& nums) {\n int res=0;\n for(int i=0;i<nums.size();i++){\n res^=nums[i];\n }\n return res;\n }\n};\n``` |
| buggy_code | The buggy version of the solution waiting to be debugged. | class Solution { public: int singleNumber(vector<int>& nums) { unordered_map<int,int> mp; for(int i=0;i<nums.size();i++){ mp[nums[i]] = 1; } for(auto m:mp){ if(m.second==1){ return m.first; } } return -1; } }; |
| bug_explanation | Explanation of the implanted bug. | Instead of incrementing the map's value, we are setting it to 1 each time. |

#### Data Splits

The dataset is an evaluation benchmark, so it comprises only one split: the eval split of 4,253 instances.

#### Evaluation

The evaluation is based on the unseen test suites from [LeetCode](https://leetcode.com), a popular programming challenge platform. The evaluator will need a LeetCode account for the usage of the test suites.
To obtain the 'leetcode_session' cookie, they may utilize the developer view in their web browsers like [Chrome](https://www.google.com/chrome/) or use browser extensions like [EditThisCookie](https://chromewebstore.google.com/detail/fngmhnnpilhplaeedifhccceomclgfbg). Please refer to the following code example for evaluation. This example is accessible in this Hugging Face repository. The leetcode_env package comes from [Leetcode Hard Gym](https://github.com/GammaTauAI/leetcode-hard-gym). More implementation details are available in [our GitHub repository](https://github.com/thunlp/DebugBench). ```python import os from .leetcode_env.environment import LeetCodeEnv from .leetcode_env.types import LeetCodeSubmission, ProgrammingLanguage LEETCODE_SESSION_COOKIE = os.environ['LEETCODE_SESSION'] class LeetCodeTester(object): def __init__(self): os.environ['LEETCODE_SESSION'] = LEETCODE_SESSION_COOKIE self.env = LeetCodeEnv(cooldown=15) self.lang_dict = { "python3": ProgrammingLanguage.PYTHON3, "java": ProgrammingLanguage.JAVA, "cpp": ProgrammingLanguage.CPP, } def test(self, code: str, task_id: str, language: str) -> tuple[bool, dict]: lang = self.lang_dict.get(language) sub = LeetCodeSubmission(code=code, lang=lang, question_slug=task_id) status, reward, done, submission_result = self.env.step(sub) return reward, submission_result ``` ```python if __name__ == '__main__': tester = LeetCodeTester() task_id = "make-number-of-distinct-characters-equal" code = "class Solution:\n\n def insertAndRemove(self, mp, toInsert..." # abbreviated print(tester.test(code, task_id, "python3")) ``` Here are two output examples. 
```python
(True, {'status_code': 10, 'lang': 'python3', 'run_success': True, 'status_runtime': '111 ms', 'memory': 18488000, 'question_id': '2615', 'elapsed_time': 133, 'compare_result': '111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111', 'code_output': '', 'std_output': '', 'last_testcase': '', 'expected_output': '', 'task_finish_time': 1704769340887, 'task_name': 'judger.judgetask.Judge', 'finished': True, 'total_correct': 99, 'total_testcases': 99, 'runtime_percentile': 73.75, 'status_memory': '18.5 MB', 'memory_percentile': 15.625, 'pretty_lang': 'Python3', 'submission_id': '1141026534', 'status_msg': 'Accepted', 'state': 'SUCCESS'})

(False, {'status_code': 11, 'lang': 'python3', 'run_success': True, 'status_runtime': 'N/A', 'memory': 18532000, 'question_id': '2615', 'elapsed_time': 184, 'compare_result': '101110111101010010111100110101111111011010100001111101011111000111010111000111101011011011101110011', 'code_output': 'false', 'std_output': '', 'last_testcase': '"abcc"\n"aab"', 'expected_output': 'true', 'task_finish_time': 1704769355341, 'task_name': 'judger.judgetask.Judge', 'finished': True, 'total_correct': 64, 'total_testcases': 99, 'runtime_percentile': None, 'status_memory': 'N/A', 'memory_percentile': None, 'pretty_lang': 'Python3', 'submission_id': '1141026664', 'input_formatted': '"abcc", "aab"', 'input': '"abcc"\n"aab"', 'status_msg': 'Wrong Answer', 'state': 'SUCCESS'})
```

#### Dataset Creation

![construct](fig/construct.png)

As illustrated in the figure above, to construct DebugBench, we collect code snippets from the LeetCode community, implant bugs into source data with GPT-4, and assure quality checks. We also evaluate two commercial and three open-source models in a zero-shot scenario. Please refer to the [article](https://arxiv.org/abs/2401.04621) for more details.
#### Limitation

- Bug instances in our experiments are synthetically created and might not entirely reflect the intricacies of real-world debugging scenarios.
- For a few bug subtypes, some bug instances may have a taxonomy inconsistent with their labeled feature. An example of this is a bug generated in response to a prompt specifying 'undefined methods.' Rather than invoking undefined functions as expected, the code triggers an infinite loop.

#### Citation Information

```latex
@misc{tian2024debugbench,
      title={DebugBench: Evaluating Debugging Capability of Large Language Models},
      author={Runchu Tian and Yining Ye and Yujia Qin and Xin Cong and Yankai Lin and Zhiyuan Liu and Maosong Sun},
      year={2024},
      eprint={2401.04621},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```
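As an illustration of the evaluation flow, the tester shown earlier can be wired into a repair-and-evaluate loop over benchmark instances. Everything named below — `repair`, `StubTester`, and the toy instance — is a hypothetical stand-in (the real loop would call an LLM and the `LeetCodeTester`), so the flow is runnable without a LeetCode account.

```python
# Hypothetical sketch of a repair-and-evaluate loop over DebugBench
# instances. `repair` stands in for an LLM call and `StubTester` for the
# LeetCodeTester above; both are illustrative stubs, not the real APIs.

def repair(buggy_code: str, bug_explanation: str = "") -> str:
    # Stand-in for querying a model; here we "fix" one known bug pattern.
    return buggy_code.replace("mp[nums[i]] = 1", "mp[nums[i]]++")

class StubTester:
    def test(self, code: str, task_id: str, language: str):
        # Stand-in for LeetCodeTester.test: pass iff the bug is gone.
        passed = "mp[nums[i]]++" in code
        return passed, {"status_msg": "Accepted" if passed else "Wrong Answer"}

def evaluate(instances, tester) -> float:
    """Return the fraction of instances whose repaired code passes the tests."""
    n_passed = 0
    for inst in instances:
        fixed = repair(inst["buggy_code"], inst.get("bug_explanation", ""))
        ok, _ = tester.test(fixed, inst["slug"], inst["language"])
        n_passed += ok
    return n_passed / len(instances)

# A toy instance mirroring the buggy_code example from the data fields table.
instances = [{
    "slug": "single-number",
    "language": "cpp",
    "buggy_code": "for(int i=0;i<nums.size();i++){ mp[nums[i]] = 1; }",
    "bug_explanation": "Sets the count to 1 instead of incrementing it.",
}]
pass_rate = evaluate(instances, StubTester())
```

Swapping `StubTester` for the real `LeetCodeTester` and `repair` for a model call gives the zero-shot evaluation loop described in the paper.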
Rtian/DebugBench
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:conversational", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "code", "arxiv:2401.04621", "region:us" ]
2024-01-08T12:28:30+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "question-answering", "conversational"], "pretty_name": "DebugBench", "tags": ["code"]}
2024-01-11T15:28:41+00:00
[ "2401.04621" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #code #arxiv-2401.04621 #region-us
![icon](fig/URL) #### Dataset Summary DebugBench is a Large Language Model (LLM) debugging benchmark introduced in the paper DebugBench: Evaluating Debugging Capability of Large Language Models. We collect code snippets from the LeetCode community and implant bugs into source data with GPT-4. The project is also open-sourced as a GitHub repository. * It consists of 4,253 instances. * It covers four major bug categories and 18 minor types. * It includes C++, Java, and Python instances. * It contains three difficulty levels: easy, medium, and hard. * All the instances were released after June 2022. * Please refer to the article for more details. #### Data Fields An instance in DebugBench contains 13 features. #### Data Splits The dataset is an evaluation benchmark and there comprises only one split, the eval split of 4, 253. #### Evaluation The evaluation is based on the unseen test suites from LeetCode, a popular programming challenge platform. The evaluator will need a leetcode account for the usage of test suites. To obtain the 'leetcode\_session' cookie, they may utilize the developer view in their web browsers like Chrome or use browser extensions like EditThisCookie. Please refer to the following code example for evaluation. This example is accessible in this Hugging Face repository. The leetcode\_env package comes from Leetcode Hard Gym. More implementation details are available in our GitHub repository. Here are two output examples. #### Dataset Creation !construct As illustrated in the figure above, to construct DebugBench, we collect code snippets from the LeetCode community, implant bugs into source data with GPT-4, and assure quality checks. We also evaluate two commercial and three open-source models in a zero-shot scenario. Please refer to the article for more details. #### Limitation * Bug instances in our experiments are synthetically created and might not entirely reflect the intricacies of real-world debugging scenarios. 
* For a few bug subtypes, some bug instances may have an inconsistent taxonomy nature than the labeled feature. An example of this is a bug generated in response to a prompt specifying 'undefined methods.' Rather than invoking undefined functions as expected, the code triggers an infinite loop.
[ "#### Dataset Summary\n\n\nDebugBench is a Large Language Model (LLM) debugging benchmark introduced in the paper DebugBench: Evaluating Debugging Capability of Large Language Models. We collect code snippets from the LeetCode community and implant bugs into source data with GPT-4. The project is also open-sourced as a GitHub repository. \n\n\n\n* It consists of 4,253 instances.\n* It covers four major bug categories and 18 minor types.\n* It includes C++, Java, and Python instances.\n* It contains three difficulty levels: easy, medium, and hard.\n* All the instances were released after June 2022.\n* Please refer to the article for more details.", "#### Data Fields\n\n\nAn instance in DebugBench contains 13 features.", "#### Data Splits\n\n\nThe dataset is an evaluation benchmark and there comprises only one split, the eval split of 4, 253.", "#### Evaluation\n\n\nThe evaluation is based on the unseen test suites from LeetCode, a popular programming challenge platform. The evaluator will need a leetcode account for the usage of test suites. To obtain the 'leetcode\\_session' cookie, they may utilize the developer view in their web browsers like Chrome or use browser extensions like EditThisCookie.\n\n\nPlease refer to the following code example for evaluation. This example is accessible in this Hugging Face repository. The leetcode\\_env package comes from Leetcode Hard Gym. More implementation details are available in our GitHub repository.\n\n\nHere are two output examples.", "#### Dataset Creation\n\n\n!construct\n\n\nAs illustrated in the figure above, to construct DebugBench, we collect code snippets from the LeetCode community, implant bugs into source data with GPT-4, and assure quality checks. We also evaluate two commercial and three open-source models in a zero-shot scenario. 
Please refer to the article for more details.", "#### Limitation\n\n\n* Bug instances in our experiments are synthetically created and might not entirely reflect the intricacies of real-world debugging scenarios.\n* For a few bug subtypes, some bug instances may have an inconsistent taxonomy nature than the labeled feature. An example of this is a bug generated in response to a prompt specifying 'undefined methods.' Rather than invoking undefined functions as expected, the code triggers an infinite loop." ]
[ "TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #code #arxiv-2401.04621 #region-us \n", "#### Dataset Summary\n\n\nDebugBench is a Large Language Model (LLM) debugging benchmark introduced in the paper DebugBench: Evaluating Debugging Capability of Large Language Models. We collect code snippets from the LeetCode community and implant bugs into source data with GPT-4. The project is also open-sourced as a GitHub repository. \n\n\n\n* It consists of 4,253 instances.\n* It covers four major bug categories and 18 minor types.\n* It includes C++, Java, and Python instances.\n* It contains three difficulty levels: easy, medium, and hard.\n* All the instances were released after June 2022.\n* Please refer to the article for more details.", "#### Data Fields\n\n\nAn instance in DebugBench contains 13 features.", "#### Data Splits\n\n\nThe dataset is an evaluation benchmark and there comprises only one split, the eval split of 4, 253.", "#### Evaluation\n\n\nThe evaluation is based on the unseen test suites from LeetCode, a popular programming challenge platform. The evaluator will need a leetcode account for the usage of test suites. To obtain the 'leetcode\\_session' cookie, they may utilize the developer view in their web browsers like Chrome or use browser extensions like EditThisCookie.\n\n\nPlease refer to the following code example for evaluation. This example is accessible in this Hugging Face repository. The leetcode\\_env package comes from Leetcode Hard Gym. More implementation details are available in our GitHub repository.\n\n\nHere are two output examples.", "#### Dataset Creation\n\n\n!construct\n\n\nAs illustrated in the figure above, to construct DebugBench, we collect code snippets from the LeetCode community, implant bugs into source data with GPT-4, and assure quality checks. 
We also evaluate two commercial and three open-source models in a zero-shot scenario. Please refer to the article for more details.", "#### Limitation\n\n\n* Bug instances in our experiments are synthetically created and might not entirely reflect the intricacies of real-world debugging scenarios.\n* For a few bug subtypes, some bug instances may have an inconsistent taxonomy nature than the labeled feature. An example of this is a bug generated in response to a prompt specifying 'undefined methods.' Rather than invoking undefined functions as expected, the code triggers an infinite loop." ]
[ 74, 164, 17, 30, 142, 83, 112 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #code #arxiv-2401.04621 #region-us \n#### Dataset Summary\n\n\nDebugBench is a Large Language Model (LLM) debugging benchmark introduced in the paper DebugBench: Evaluating Debugging Capability of Large Language Models. We collect code snippets from the LeetCode community and implant bugs into source data with GPT-4. The project is also open-sourced as a GitHub repository. \n\n\n\n* It consists of 4,253 instances.\n* It covers four major bug categories and 18 minor types.\n* It includes C++, Java, and Python instances.\n* It contains three difficulty levels: easy, medium, and hard.\n* All the instances were released after June 2022.\n* Please refer to the article for more details.#### Data Fields\n\n\nAn instance in DebugBench contains 13 features.#### Data Splits\n\n\nThe dataset is an evaluation benchmark and there comprises only one split, the eval split of 4, 253.#### Evaluation\n\n\nThe evaluation is based on the unseen test suites from LeetCode, a popular programming challenge platform. The evaluator will need a leetcode account for the usage of test suites. To obtain the 'leetcode\\_session' cookie, they may utilize the developer view in their web browsers like Chrome or use browser extensions like EditThisCookie.\n\n\nPlease refer to the following code example for evaluation. This example is accessible in this Hugging Face repository. The leetcode\\_env package comes from Leetcode Hard Gym. More implementation details are available in our GitHub repository.\n\n\nHere are two output examples." ]
711a10d6a4c33edd5ec868a122eb4c32abe3fea7
# Dataset Card for "news-headline-gen" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nutorbit/news-headline-gen
[ "region:us" ]
2024-01-08T12:37:47+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "headline", "dtype": "string"}, {"name": "news", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23555772, "num_examples": 21157}, {"name": "dev", "num_bytes": 2628111, "num_examples": 2365}], "download_size": 17404158, "dataset_size": 26183883}}
2024-01-08T12:47:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "news-headline-gen" More Information needed
[ "# Dataset Card for \"news-headline-gen\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"news-headline-gen\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"news-headline-gen\"\n\nMore Information needed" ]
f63461d2bf47fcfbdaa5454ae8582cffa86ea80b
# Dataset Card for "scandi-langid" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kardosdrur/scandi-langid
[ "region:us" ]
2024-01-08T14:21:36+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28278216.0, "num_examples": 240000}, {"name": "test", "num_bytes": 7069554.0, "num_examples": 60000}], "download_size": 23042893, "dataset_size": 35347770.0}}
2024-01-08T14:21:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "scandi-langid" More Information needed
[ "# Dataset Card for \"scandi-langid\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"scandi-langid\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"scandi-langid\"\n\nMore Information needed" ]
93cc80959b796cab456a8e638fe25753348ea55d
# AI Dictionary Dataset

Welcome to the AI Dictionary dataset on Hugging Face. This dataset is a comprehensive resource comprising 16,665 unique key phrases that describe the whole domain of Artificial Intelligence (AI). It serves both the research community and industry, aiding in the identification of radical innovations and uncovering applications of AI in new domains.

This dataset is the result of the research paper "The AI Dictionary: The Foundation for a Text-Based Tool to Identify and Measure Technology Innovation". The paper explores the rapidly evolving landscape of AI as a General Purpose Technology and its dual role in driving and sustaining innovation across various domains.

The AI Dictionary is designed to measure technological innovation using text-based methods. We hope to establish a foundational methodology for a new innovation measurement tool.

## Related Repository

The process of creating and validating the AI Dictionary is detailed in a series of Jupyter notebooks and Python scripts in the related GitHub repository. You can access the repository at the following link:

[AI Dictionary GitHub Repository](https://github.com/J0nasW/AI-Dictionary)

Please refer to the repository for a deeper understanding of the methodology and the process behind the creation of the AI Dictionary.

## Usage

You can use this dataset for a variety of AI and Machine Learning tasks such as text classification, named entity recognition, and more. The dataset can also be used for research purposes to identify and measure technological innovation in the field of AI.

## License

This dataset is released under the MIT License.
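One common way to apply a key-phrase dictionary like this is to count phrase occurrences in a document as a rough signal of AI relatedness. A minimal sketch follows; the three phrases are illustrative stand-ins, not actual entries from the dictionary.

```python
import re

# Illustrative stand-ins for the real AI Dictionary key phrases.
ai_phrases = ["neural network", "reinforcement learning", "transformer"]

def ai_phrase_counts(text: str, phrases=ai_phrases) -> dict[str, int]:
    """Count case-insensitive, whole-phrase occurrences of each key phrase."""
    counts = {}
    lowered = text.lower()
    for phrase in phrases:
        # Word boundaries keep "transformer" from matching "transformers'".
        pattern = r"\b" + re.escape(phrase.lower()) + r"\b"
        counts[phrase] = len(re.findall(pattern, lowered))
    return counts

doc = "A transformer is a neural network; transformer models dominate NLP."
counts = ai_phrase_counts(doc)
```

Aggregating such counts over, say, patent abstracts or publications would yield the kind of text-based innovation signal the paper aims at.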
J0nasW/AI-Dictionary
[ "task_categories:text-classification", "task_categories:feature-extraction", "size_categories:10K<n<100K", "language:en", "license:mit", "code", "region:us" ]
2024-01-08T15:00:20+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "feature-extraction"], "tags": ["code"]}
2024-01-08T15:13:31+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-mit #code #region-us
# AI Dictionary Dataset Welcome to the AI Dictionary dataset on HuggingFace. This dataset is a comprehensive tool comprised of 16,665 unique key phrases that describe the whole domain of Artificial Intelligence (AI). It serves both the research community and industry domains, aiding in the identification of radical innovations and uncovering applications of AI in new domains. This dataset is the result of the research paper "The AI Dictionary: The Foundation for a Text-Based Tool to Identify and Measure Technology Innovation". The paper explores the rapidly evolving landscape of AI as a General Purpose Technology and its dual role in driving and sustaining innovation across various domains. The AI Dictionary is designed to measure technological innovation using text-based methods. We hope to establish a foundational methodology for a new and innovation measurement tool. ## Related Repository The process of creating and validating the AI Dictionary is detailed in a series of Jupyter notebooks and Python scripts in the related GitHub repository. You can access the repository at the following link: AI Dictionary GitHub Repository Please refer to the repository for a deeper understanding of the methodology and the process behind the creation of the AI Dictionary. ## Usage You can use this dataset for a variety of AI and Machine Learning tasks such as text classification, named entity recognition, and more. The dataset can also be used for research purposes to identify and measure technological innovation in the field of AI. ## License This dataset is released under the MIT License.
[ "# AI Dictionary Dataset\n\nWelcome to the AI Dictionary dataset on HuggingFace. This dataset is a comprehensive tool comprised of 16,665 unique key phrases that describe the whole domain of Artificial Intelligence (AI). It serves both the research community and industry domains, aiding in the identification of radical innovations and uncovering applications of AI in new domains.\n\nThis dataset is the result of the research paper \"The AI Dictionary: The Foundation for a Text-Based Tool to Identify and Measure Technology Innovation\". The paper explores the rapidly evolving landscape of AI as a General Purpose Technology and its dual role in driving and sustaining innovation across various domains. \n\nThe AI Dictionary is designed to measure technological innovation using text-based methods. We hope to establish a foundational methodology for a new and innovation measurement tool.", "## Related Repository\n\nThe process of creating and validating the AI Dictionary is detailed in a series of Jupyter notebooks and Python scripts in the related GitHub repository. You can access the repository at the following link:\n\nAI Dictionary GitHub Repository\n\nPlease refer to the repository for a deeper understanding of the methodology and the process behind the creation of the AI Dictionary.", "## Usage\n\nYou can use this dataset for a variety of AI and Machine Learning tasks such as text classification, named entity recognition, and more. The dataset can also be used for research purposes to identify and measure technological innovation in the field of AI.", "## License\n\nThis dataset is released under the MIT License." ]
[ "TAGS\n#task_categories-text-classification #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-mit #code #region-us \n", "# AI Dictionary Dataset\n\nWelcome to the AI Dictionary dataset on HuggingFace. This dataset is a comprehensive tool comprised of 16,665 unique key phrases that describe the whole domain of Artificial Intelligence (AI). It serves both the research community and industry domains, aiding in the identification of radical innovations and uncovering applications of AI in new domains.\n\nThis dataset is the result of the research paper \"The AI Dictionary: The Foundation for a Text-Based Tool to Identify and Measure Technology Innovation\". The paper explores the rapidly evolving landscape of AI as a General Purpose Technology and its dual role in driving and sustaining innovation across various domains. \n\nThe AI Dictionary is designed to measure technological innovation using text-based methods. We hope to establish a foundational methodology for a new and innovation measurement tool.", "## Related Repository\n\nThe process of creating and validating the AI Dictionary is detailed in a series of Jupyter notebooks and Python scripts in the related GitHub repository. You can access the repository at the following link:\n\nAI Dictionary GitHub Repository\n\nPlease refer to the repository for a deeper understanding of the methodology and the process behind the creation of the AI Dictionary.", "## Usage\n\nYou can use this dataset for a variety of AI and Machine Learning tasks such as text classification, named entity recognition, and more. The dataset can also be used for research purposes to identify and measure technological innovation in the field of AI.", "## License\n\nThis dataset is released under the MIT License." ]
[ 52, 182, 87, 58, 12 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-mit #code #region-us \n# AI Dictionary Dataset\n\nWelcome to the AI Dictionary dataset on HuggingFace. This dataset is a comprehensive tool comprised of 16,665 unique key phrases that describe the whole domain of Artificial Intelligence (AI). It serves both the research community and industry domains, aiding in the identification of radical innovations and uncovering applications of AI in new domains.\n\nThis dataset is the result of the research paper \"The AI Dictionary: The Foundation for a Text-Based Tool to Identify and Measure Technology Innovation\". The paper explores the rapidly evolving landscape of AI as a General Purpose Technology and its dual role in driving and sustaining innovation across various domains. \n\nThe AI Dictionary is designed to measure technological innovation using text-based methods. We hope to establish a foundational methodology for a new and innovation measurement tool.## Related Repository\n\nThe process of creating and validating the AI Dictionary is detailed in a series of Jupyter notebooks and Python scripts in the related GitHub repository. You can access the repository at the following link:\n\nAI Dictionary GitHub Repository\n\nPlease refer to the repository for a deeper understanding of the methodology and the process behind the creation of the AI Dictionary.## Usage\n\nYou can use this dataset for a variety of AI and Machine Learning tasks such as text classification, named entity recognition, and more. The dataset can also be used for research purposes to identify and measure technological innovation in the field of AI.## License\n\nThis dataset is released under the MIT License." ]
a91c66786590154e31620f09baa20c05d0542de9
# Dataset Card for "fashion_image_caption_100_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jbfreb/fashion_image_caption_100_v2
[ "region:us" ]
2024-01-08T15:06:18+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22820471.0, "num_examples": 100}], "download_size": 22820374, "dataset_size": 22820471.0}}
2024-01-08T15:06:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fashion_image_caption_100_v2" More Information needed
[ "# Dataset Card for \"fashion_image_caption_100_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fashion_image_caption_100_v2\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fashion_image_caption_100_v2\"\n\nMore Information needed" ]
459106ceb985ff5979f9eb79c9cb5d973009a3a6
Source: https://huggingface.co/datasets/ywchoi/pubmed_abstract_3
awinml/pubmed_abstract_3_1k
[ "region:us" ]
2024-01-08T15:50:26+00:00
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1524794, "num_examples": 1000}], "download_size": 873865, "dataset_size": 1524794}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-08T16:00:06+00:00
[]
[]
TAGS #region-us
Source: URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
1380b681fa4488ef3cbf79dbe77e9dc9edb07ebd
Phase 1 of PRM800K, processed for preference learning. Note that PRM800K uses a different test split for MATH than typical, so this dataset includes data from the MATH test set.
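Since each row pairs a single `chosen` continuation against a list of `rejected` alternatives (per the column schema in this card's metadata: `question` string, `history` sequence, `chosen` string, `rejected` sequence), a preference-tuning pipeline would typically expand each row into one (prompt, chosen, rejected) triple per rejected continuation. A minimal sketch of that expansion, using an invented toy row rather than real PRM800K data:

```python
# Sketch only: assumes the schema shown in this card ("question" str,
# "history" list of prior steps, "chosen" str, "rejected" list of str).
# The row below is a made-up illustration, not actual PRM800K content.

def to_dpo_pairs(row):
    """Expand one preference row into (prompt, chosen, rejected) triples,
    one per rejected continuation."""
    prompt = row["question"] + "\n" + "\n".join(row["history"])
    return [(prompt, row["chosen"], bad) for bad in row["rejected"]]

row = {
    "question": "What is 2 + 3 * 4?",
    "history": ["First apply the multiplication: 3 * 4 = 12."],
    "chosen": "Then add: 2 + 12 = 14.",
    "rejected": [
        "Add first: 2 + 3 = 5, then 5 * 4 = 20.",
        "The answer is 24.",
    ],
}

pairs = to_dpo_pairs(row)
print(len(pairs))  # 2
```

Each triple shares the same prompt (question plus solution history so far), which is the usual input shape for DPO-style trainers.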
euclaise/prm800k_preferences
[ "license:mit", "region:us" ]
2024-01-08T16:21:14+00:00
{"license": "mit", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "history", "sequence": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 11928250, "num_examples": 5728}], "download_size": 1827317, "dataset_size": 11928250}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-13T20:45:34+00:00
[]
[]
TAGS #license-mit #region-us
Phase 1 of PRM800K, processed for preference learning. Note that PRM800K uses a different test split for MATH than typical, so this dataset includes data from the MATH test set.
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
ee4d22d49b8486786cbc01c0b6f6f6266e56f3bd
# Dataset Card for "proofwriter" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tasksource/proofwriter
[ "region:us" ]
2024-01-08T16:31:01+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "maxD", "dtype": "int64"}, {"name": "NFact", "dtype": "int64"}, {"name": "NRule", "dtype": "int64"}, {"name": "theory", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "QDep", "dtype": "int64"}, {"name": "QLen", "dtype": "float64"}, {"name": "allProofs", "dtype": "string"}, {"name": "config", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 849283641, "num_examples": 585552}, {"name": "test", "num_bytes": 262359174, "num_examples": 174476}, {"name": "validation", "num_bytes": 121834168, "num_examples": 85468}], "download_size": 42944158, "dataset_size": 1233476983}}
2024-01-08T16:33:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "proofwriter" More Information needed
[ "# Dataset Card for \"proofwriter\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"proofwriter\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"proofwriter\"\n\nMore Information needed" ]
0569b46516118dfe1a8a9c278c6f1670549af443
# Dataset Card for SUMM-RE small

Manually corrected transcripts of French conversations, aligned with the audio signal.

## Dataset Details

### Dataset Description

The SUMM-RE dataset is a corpus of meeting-style conversations in French created for the purpose of the SUMM-RE project (ANR-20-CE23-0017). SUMM-RE small is a subset of the full SUMM-RE corpus for which the transcripts have been manually corrected and aligned with the audio down to phoneme level. It can be used for the evaluation of automatic speech recognition and voice activity detection models.

The SUMM-RE small subset consists of 10 randomly selected conversations. Each conversation lasts roughly 20 minutes and involves 3-4 speakers. Each participant has an individual microphone and associated .wav file, leading to 39 audio files in all.

- **Created by:** The corpus was recorded and manually annotated by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France.
- **Funded by:** The National Research Agency of France (ANR) for the SUMM-RE project (ANR-20-CE23-0017).
- **Shared by:** LINAGORA (coordinator of the SUMM-RE project)
- **Language:** French
- **License:** CC BY-SA 4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** Both gold corrected and automatic transcripts (produced with Whisper) can be found on [Ortolang](https://www.ortolang.fr/market/corpora/summ-re-asru).
- **Paper:** [More Information Needed]

## Uses

### Direct Use

This version of SUMM-RE small is designed for the evaluation of automatic speech recognition models and voice activity detection for conversational, spoken French.

### Out-of-Scope Use

Due to its size, the corpus is not suitable for model training.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- **meeting_id**, e.g. 001a_PARL, includes:
  - experiment number, e.g. 001
  - meeting order: a|b|c (there were three meetings per experiment)
  - experiment type: E (experiment) | P (pilot experiment)
  - scenario/topic: A|B|C|D|E
  - meeting type: R (reporting) | D (decision) | P (planning)
  - recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home)
- **speaker_id**
- **audio_id**: meeting_id + speaker_id
- **audio**: the .wav file for an individual speaker
- **transcript**: the manually corrected transcript (corrected from Whisper transcripts)
- **ipus**: a list of start and end times for manually annotated interpausal units (units of speech from a single speaker that are separated by silences above a certain threshold)
- **words**: a list of start and end times for each word
- **phonemes**: a list of start and end times for each phoneme

## Dataset Creation

### Curation Rationale

The full SUMM-RE corpus, which includes meeting summaries, is designed to train and evaluate models for meeting summarization. SUMM-RE small is an extract of this corpus used to evaluate various stages of the summarization pipeline, starting with automatic transcription of the audio signal.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

The SUMM-RE corpus is an original corpus designed by members of LINAGORA and the University of Aix-Marseille and recorded by the latter.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

Corpus design and production:

- University of Aix-Marseille: Océane Granier (corpus conception, recording, annotation), Laurent Prévot (corpus conception, annotation, supervision), Hiroyoshi Yamasaki (corpus cleaning, alignment and anonymization), Roxanne Bertrand (corpus conception and annotation), with helpful input from Brigitte Bigi and Stéphane Rauzy.
- LINAGORA: Julie Hunter, Kate Thompson and Guokan Shang (corpus conception)

Corpus participants:

- Participants for the in-person conversations were recruited on the University of Aix-Marseille campus.
- Participants for the Zoom meetings were recruited through [Prolific](https://www.prolific.com/).

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

Principal annotator: Océane Granier

Additional assistance from: Laurent Prévot, Hiroyoshi Yamasaki and Roxane Bertrand

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

The audio and transcripts have been (semi-automatically) anonymized.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

Hiroyoshi Yamasaki, Jérôme Louradour, Julie Hunter and Laurent Prévot (2023): "Transcribing and aligning conversational speech: A hybrid pipeline applied to French conversations," Workshop on Automatic Speech Recognition and Understanding.

**BibTeX:** [More Information Needed]

**APA:** [More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]
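The alignment fields in this card's schema lend themselves to simple span queries, e.g. collecting the words that fall inside a given inter-pausal unit. A minimal sketch, assuming the field shapes listed above (`ipus` as `{"start", "end"}` dicts, `words` as `{"start", "end", "word"}` dicts) and using invented values for illustration:

```python
# Sketch only: field shapes follow this card's schema; the timing values
# and words below are invented, not taken from the SUMM-RE corpus.

def words_in_ipu(words, ipu):
    """Return the words whose span falls entirely inside an inter-pausal unit."""
    return [w["word"] for w in words
            if w["start"] >= ipu["start"] and w["end"] <= ipu["end"]]

words = [
    {"start": 0.10, "end": 0.35, "word": "bonjour"},
    {"start": 0.40, "end": 0.55, "word": "à"},
    {"start": 0.60, "end": 0.90, "word": "tous"},
    {"start": 2.10, "end": 2.40, "word": "alors"},
]
ipus = [{"start": 0.0, "end": 1.0}, {"start": 2.0, "end": 3.0}]

print(words_in_ipu(words, ipus[0]))  # ['bonjour', 'à', 'tous']
```

The same containment test works for the `phonemes` field, which shares the start/end layout.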
linagora/SUMM-RE_sm
[ "task_categories:automatic-speech-recognition", "task_categories:voice-activity-detection", "size_categories:100K<n<1M", "language:fr", "license:cc-by-sa-4.0", "NLP", "conversational", "automatic speech recognition", "voice activity detection", "inter-pausal units", "region:us" ]
2024-01-08T16:34:14+00:00
{"language": ["fr"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["automatic-speech-recognition", "voice-activity-detection"], "pretty_name": "SUMM-RE small", "dataset_info": {"features": [{"name": "meeting_id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "audio_id", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "transcript", "dtype": "string"}, {"name": "ipus", "list": [{"name": "end", "dtype": "float64"}, {"name": "start", "dtype": "float64"}]}, {"name": "words", "list": [{"name": "end", "dtype": "float64"}, {"name": "start", "dtype": "float64"}, {"name": "word", "dtype": "string"}]}, {"name": "phonemes", "list": [{"name": "end", "dtype": "float64"}, {"name": "phoneme", "dtype": "string"}, {"name": "start", "dtype": "float64"}]}], "splits": [{"name": "train", "num_bytes": 4440887851.0, "num_examples": 39}], "download_size": 4416239830, "dataset_size": 4440887851.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["NLP", "conversational", "automatic speech recognition", "voice activity detection", "inter-pausal units"]}
2024-01-18T22:58:16+00:00
[]
[ "fr" ]
TAGS #task_categories-automatic-speech-recognition #task_categories-voice-activity-detection #size_categories-100K<n<1M #language-French #license-cc-by-sa-4.0 #NLP #conversational #automatic speech recognition #voice activity detection #inter-pausal units #region-us
# Dataset Card for SUMM-RE small Manually corrected transcripts of French conversations, aligned with the audio signal. ## Dataset Details ### Dataset Description The SUMM-RE dataset is a corpus of meeting-style conversations in French created for the purpose of the SUMM-RE project (ANR-20-CE23-0017). SUMM-RE small is a subset of the full SUMM-RE corpus for which the transcripts have been manually corrected and aligned with the audio down to phoneme level. It can be used for the evaluation of automatic speech recognition and voice activity detection models. The SUMM-RE small subset consists of 10 randomly selected conversations. Each conversation lasts roughly 20 minutes and involves 3-4 speakers. Each participant has an individual microphone and associated .wav file leading to 39 audio files in all. - Created by: The corpus was recorded and manually annotated by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France. - Funded by: The National Research Agency of France (ANR) for the SUMM-RE project (ANR-20-CE23-0017). - Shared by: LINAGORA (coordinator of the SUMM-RE project) - Language: French - License: CC BY-SA 4.0 ### Dataset Sources - Repository: Both gold corrected and automatic transcripts (produced with Whisper) can be found on Ortolang. - Paper: ## Uses ### Direct Use This version of SUMM-RE small is designed for the evaluation of automatic speech recognition models and voice activity detection for conversational, spoken French. ### Out-of-Scope Use Due to its size, the corpus is not suitable for model training. ## Dataset Structure - meeting_id, e.g. 001a_PARL, includes: - experiment number, e.g. 
001 - meeting order: a|b|c (there were three meetings per experiment) - experiment type: E (experiment) | P (pilot experiment) - scenario/topic: A|B|C|D|E - meeting type: R (reporting) | D (decision) | P (planning) - recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home) - speaker_id - audio_id: meeting_id + speaker_id - audio: the .wav file for an individual speaker - transcript: the manually corrected transcript (corrected from Whisper transcripts) - ipus: a list of start and end times for manually annotated interpausal units (units of speech from a single speaker that are separated by silences above a certain threshold) - words: a list of start and end times for each word - phonemes: a list of start and end times for each phoneme ## Dataset Creation ### Curation Rationale The full SUMM-RE corpus, which includes meeting summaries, is designed to train and evaluate models for meeting summarization. SUMM-RE small is an extract of this corpus used to evaluate various stages of the summarization pipeline, starting with automatic transcription of the audio signal. ### Source Data The SUMM-RE corpus is an original corpus designed by members of LINAGORA and the University of Aix-Marseille and recorded by the latter. #### Data Collection and Processing #### Who are the source data producers? Corpus design and production: - University of Aix-Marseille: Océane Granier (corpus conception, recording, annotation), Laurent Prévot (corpus conception, annotation, supervision), Hiroyoshi Yamasaki (corpus cleaning, alignment and anonymization), Roxanne Bertrand (corpus conception and annotation) with helpful input from Brigitte Bigi and Stéphane Rauzy. - LINAGORA: Julie Hunter, Kate Thompson and Guokan Shang (corpus conception) Corpus participants: - Participants for the in-person conversations were recruited on the University of Aix-Marseille campus. - Participants for the Zoom meetings were recruited through Prolific. 
### Annotations #### Annotation process #### Who are the annotators? Principal annotator: Océane Granier Additional assistance from: Laurent Prévot, Hiroyoshi Yamasaki and Roxane Bertrand #### Personal and Sensitive Information The audio and transcripts have been (semi-automatically) anonymized. ## Bias, Risks, and Limitations ### Recommendations [optional] Hiroyoshi Yamasaki, Jérôme Louradour, Julie Hunter and Laurent Prévot (2023): "Transcribing and aligning conversational speech: A hybrid pipeline applied to French conversations," Workshop on Automatic Speech Recognition and Understanding. BibTeX: APA: ## Glossary [optional]
[ "# Dataset Card for SUMM-RE small\n\nManually corrected transcripts of French conversations, aligned with the audio signal.", "## Dataset Details", "### Dataset Description\n\nThe SUMM-RE dataset is a corpus of meeting-style conversations in French created for the purpose of the SUMM-RE project (ANR-20-CE23-0017). SUMM-RE small is a subset of the full SUMM-RE corpus for which the transcripts have been manually corrected and aligned with the audio down to phoneme level. It can be used for the evaluation of automatic speech recognition and voice activity detection models.\n\nThe SUMM-RE small subset consists of 10 randomly selected conversations. Each conversation lasts roughly 20 minutes and involves 3-4 speakers. Each participant has an individual microphone and associated .wav file leading to 39 audio files in all. \n\n\n- Created by: The corpus was recorded and manually annotated by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France.\n- Funded by: The National Research Agency of France (ANR) for the SUMM-RE project (ANR-20-CE23-0017).\n- Shared by: LINAGORA (coordinator of the SUMM-RE project)\n- Language: French\n- License: CC BY-SA 4.0", "### Dataset Sources \n\n\n\n- Repository: Both gold corrected and automatic transcripts (produced with Whisper) can be found on Ortolang.\n- Paper:", "## Uses", "### Direct Use\n\nThis version of SUMM-RE small is designed for the evaluation of automatic speech recognition models and voice activity detection for conversational, spoken French.", "### Out-of-Scope Use\n\nDue to its size, the corpus is not suitable for model training.", "## Dataset Structure\n\n\n\n- meeting_id, e.g. 001a_PARL, includes:\n - experiment number, e.g. 
001\n - meeting order: a|b|c (there were three meetings per experiment)\n - experiment type: E (experiment) | P (pilot experiment)\n - scenario/topic: A|B|C|D|E\n - meeting type: R (reporting) | D (decision) | P (planning)\n - recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home)\n- speaker_id\n- audio_id: meeting_id + speaker_id\n- audio: the .wav file for an individual speaker\n- transcript: the manually corrected transcript (corrected from Whisper transcripts)\n- ipus: a list of start and end times for manually annotated interpausal units (units of speech from a single speaker that are separated by silences above a certain threshold)\n- words: a list of start and end times for each word\n- phonemes: a list of start and end times for each phoneme", "## Dataset Creation", "### Curation Rationale\n\nThe full SUMM-RE corpus, which includes meeting summaries, is designed to train and evaluate models for meeting summarization. SUMM-RE small is an extract of this corpus used to evaluate various stages of the summarization pipeline, starting with automatic transcription of the audio signal.", "### Source Data\n\n\n\nThe SUMM-RE corpus is an original corpus designed by members of LINAGORA and the University of Aix-Marseille and recorded by the latter.", "#### Data Collection and Processing", "#### Who are the source data producers?\n\n\n\nCorpus design and production:\n- University of Aix-Marseille: Océane Granier (corpus conception, recording, annotation), Laurent Prévot (corpus conception, annotatation, supervision), Hiroyoshi Yamasaki (corpus cleaning, alignment and anonymization), Roxanne Bertrand (corpus conception and annotation) with helpful input from Brigitte Bigi and Stéphane Rauzy.\n\n- LINAGORA: Julie Hunter, Kate Thompson and Guokan Shang (corpus conception)\n\nCorpus participants:\n- Participants for the in-person conversations were recruited on the University of Aix-Marseille campus.\n- Participants for the zoom meetings were recruited 
through Prolific.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\n\nPrincipal annotator: Océane Granier \n\nAdditional assistance from: Laurent Prévot, Hiroyoshi Yamasaki and Roxane Bertrand", "#### Personal and Sensitive Information\n\n\n\nThe audio and transcripts have been (semi-automatically) anonymized.", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\n\n\n[optional]\n\n\n\nHiroyoshi Yamasaki, Jérôme Louradour, Julie Hunter and Laurent Prévot (2023): \"Transcribing and aligning conversational speech: A hybrid pipeline applied to French conversations,\" Workshop on Automatic Speech Recognition and Understanding.\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #task_categories-voice-activity-detection #size_categories-100K<n<1M #language-French #license-cc-by-sa-4.0 #NLP #conversational #automatic speech recognition #voice activity detection #inter-pausal units #region-us \n", "# Dataset Card for SUMM-RE small\n\nManually corrected transcripts of French conversations, aligned with the audio signal.", "## Dataset Details", "### Dataset Description\n\nThe SUMM-RE dataset is a corpus of meeting-style conversations in French created for the purpose of the SUMM-RE project (ANR-20-CE23-0017). SUMM-RE small is a subset of the full SUMM-RE corpus for which the transcripts have been manually corrected and aligned with the audio down to phoneme level. It can be used for the evaluation of automatic speech recognition and voice activity detection models.\n\nThe SUMM-RE small subset consists of 10 randomly selected conversations. Each conversation lasts roughly 20 minutes and involves 3-4 speakers. Each participant has an individual microphone and associated .wav file leading to 39 audio files in all. \n\n\n- Created by: The corpus was recorded and manually annotated by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France.\n- Funded by: The National Research Agency of France (ANR) for the SUMM-RE project (ANR-20-CE23-0017).\n- Shared by: LINAGORA (coordinator of the SUMM-RE project)\n- Language: French\n- License: CC BY-SA 4.0", "### Dataset Sources \n\n\n\n- Repository: Both gold corrected and automatic transcripts (produced with Whisper) can be found on Ortolang.\n- Paper:", "## Uses", "### Direct Use\n\nThis version of SUMM-RE small is designed for the evaluation of automatic speech recognition models and voice activity detection for conversational, spoken French.", "### Out-of-Scope Use\n\nDue to its size, the corpus is not suitable for model training.", "## Dataset Structure\n\n\n\n- meeting_id, e.g. 001a_PARL, includes:\n - experiment number, e.g. 
001\n - meeting order: a|b|c (there were three meetings per experiment)\n - experiment type: E (experiment) | P (pilot experiment)\n - scenario/topic: A|B|C|D|E\n - meeting type: R (reporting) | D (decision) | P (planning)\n - recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home)\n- speaker_id\n- audio_id: meeting_id + speaker_id\n- audio: the .wav file for an individual speaker\n- transcript: the manually corrected transcript (corrected from Whisper transcripts)\n- ipus: a list of start and end times for manually annotated interpausal units (units of speech from a single speaker that are separated by silences above a certain threshold)\n- words: a list of start and end times for each word\n- phonemes: a list of start and end times for each phoneme", "## Dataset Creation", "### Curation Rationale\n\nThe full SUMM-RE corpus, which includes meeting summaries, is designed to train and evaluate models for meeting summarization. SUMM-RE small is an extract of this corpus used to evaluate various stages of the summarization pipeline, starting with automatic transcription of the audio signal.", "### Source Data\n\n\n\nThe SUMM-RE corpus is an original corpus designed by members of LINAGORA and the University of Aix-Marseille and recorded by the latter.", "#### Data Collection and Processing", "#### Who are the source data producers?\n\n\n\nCorpus design and production:\n- University of Aix-Marseille: Océane Granier (corpus conception, recording, annotation), Laurent Prévot (corpus conception, annotatation, supervision), Hiroyoshi Yamasaki (corpus cleaning, alignment and anonymization), Roxanne Bertrand (corpus conception and annotation) with helpful input from Brigitte Bigi and Stéphane Rauzy.\n\n- LINAGORA: Julie Hunter, Kate Thompson and Guokan Shang (corpus conception)\n\nCorpus participants:\n- Participants for the in-person conversations were recruited on the University of Aix-Marseille campus.\n- Participants for the zoom meetings were recruited 
through Prolific.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\n\nPrincipal annotator: Océane Granier \n\nAdditional assistance from: Laurent Prévot, Hiroyoshi Yamasaki and Roxane Bertrand", "#### Personal and Sensitive Information\n\n\n\nThe audio and transcripts have been (semi-automatically) anonymized.", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\n\n\n[optional]\n\n\n\nHiroyoshi Yamasaki, Jérôme Louradour, Julie Hunter and Laurent Prévot (2023): \"Transcribing and aligning conversational speech: A hybrid pipeline applied to French conversations,\" Workshop on Automatic Speech Recognition and Understanding.\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]" ]
[ 91, 29, 4, 262, 39, 3, 35, 24, 263, 5, 71, 40, 7, 166, 5, 5, 41, 25, 10, 76, 8 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #task_categories-voice-activity-detection #size_categories-100K<n<1M #language-French #license-cc-by-sa-4.0 #NLP #conversational #automatic speech recognition #voice activity detection #inter-pausal units #region-us \n# Dataset Card for SUMM-RE small\n\nManually corrected transcripts of French conversations, aligned with the audio signal.## Dataset Details### Dataset Description\n\nThe SUMM-RE dataset is a corpus of meeting-style conversations in French created for the purpose of the SUMM-RE project (ANR-20-CE23-0017). SUMM-RE small is a subset of the full SUMM-RE corpus for which the transcripts have been manually corrected and aligned with the audio down to phoneme level. It can be used for the evaluation of automatic speech recognition and voice activity detection models.\n\nThe SUMM-RE small subset consists of 10 randomly selected conversations. Each conversation lasts roughly 20 minutes and involves 3-4 speakers. Each participant has an individual microphone and associated .wav file leading to 39 audio files in all. \n\n\n- Created by: The corpus was recorded and manually annotated by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France.\n- Funded by: The National Research Agency of France (ANR) for the SUMM-RE project (ANR-20-CE23-0017).\n- Shared by: LINAGORA (coordinator of the SUMM-RE project)\n- Language: French\n- License: CC BY-SA 4.0### Dataset Sources \n\n\n\n- Repository: Both gold corrected and automatic transcripts (produced with Whisper) can be found on Ortolang.\n- Paper:## Uses### Direct Use\n\nThis version of SUMM-RE small is designed for the evaluation of automatic speech recognition models and voice activity detection for conversational, spoken French.### Out-of-Scope Use\n\nDue to its size, the corpus is not suitable for model training." ]
ff863eb5b3ede97257eeedcea7cba5f43c0b6296
# Dataset Card for "hotpotqa-de-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Snoopy04/hotpotqa-de-1k
[ "region:us" ]
2024-01-08T16:37:12+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1328557, "num_examples": 1000}], "download_size": 749483, "dataset_size": 1328557}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-08T16:37:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hotpotqa-de-1k" More Information needed
[ "# Dataset Card for \"hotpotqa-de-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hotpotqa-de-1k\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hotpotqa-de-1k\"\n\nMore Information needed" ]
2fb820329e911c14d6a97e99732971801a16b905
# Dataset Card for 10-K Benchmark

## Dataset Description

This is a synthetically generated financial benchmark for large language models based on a wide range of 10-K documents. The benchmark was subsequently run on multiple LLMs, which provided candidate answers for evaluation based on both the context and the generated question.

The original 10-K dataset contains annual reports of public US companies that filed with the SEC / the EDGAR system from 1993-2020. The link to the original dataset used for context extraction can be found [here](https://huggingface.co/datasets/JanosAudran/financial-reports-sec/blob/main/README.md).

GPT-4 was then used in conjunction with these pieces of extracted content to automatically generate an adversarial dataset of complex and challenging questions covering calculations, financial domain expertise and legal / regulatory knowledge.

A wide variety of models, including GPT-4-Turbo, GPT-3.5-Turbo, Llama-7b-chat-hf, Llama-2-70b-chat-hf, Zephyr-7b-Beta, Solar-10.7b-Instruct-v1.0, Mistral-7B-Instruct-v0.2 and Mixtral-8x7B-Instruct-v0.1, were then used to answer the synthetically generated questions using both the "golden context" provided AND their own knowledge.

Lastly, GPT-4 was used to evaluate the correctness of each answer that was generated by each of the models. GPT-4 provided (1) a binary "Yes/No" label (which is shared in the dataset), and (2) an accompanying justification (which was manually reviewed by our team, but not included in the dataset).

For full details on the dataset processing and overall procedure / findings of this study, please see the accompanying [Medium article](https://medium.com/@samirdutta523/58eade03bb26). 
### Usage To load the dataset, run: ```python from datasets import load_dataset ds = load_dataset("adumitrescu18/Financial10kBenchmark") ``` To filter the dataset results by model used (in this case ChatGPT 3.5 Turbo), run: ```python from datasets import load_dataset ds = load_dataset("adumitrescu18/Financial10kBenchmark") ds = ds["train"].filter(lambda x: x["Model Evaluated"] == "ChatGPT 3.5 Turbo") ``` ### Data Fields The dataset has 8 different fields and attributes: | cik | filingDate | docID | generatedQuestion | context | Model Response | Model Evaluated | Model Evaluation | |:-------:|:-----------:|:-----:| :-----:| :-----:| :-----:| :-----:| :-----:| | 10 digit identifier used by SEC for a firm | The date the 10-K report was filed with the SEC | Unique ID for identifying the exact 10-K filing. Unique across all configs and splits. Can be used to identify the document from which the sentence came from. | The question generated from GPT-4 | Sections taken from the dataset to give llms context for question answering | Response of the llm given a question and context | Name of the Model that was used for question answering | GPT-4 Evaluation of the models response |
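Since the `Model Evaluation` field holds GPT-4's binary "Yes/No" label, per-model accuracy can be tallied with a short loop. A minimal sketch on toy rows (the field names follow the table above; the row values are illustrative, not real dataset entries):

```python
from collections import Counter

# Toy rows mirroring the dataset's "Model Evaluated" / "Model Evaluation" fields.
rows = [
    {"Model Evaluated": "ChatGPT 3.5 Turbo", "Model Evaluation": "Yes"},
    {"Model Evaluated": "ChatGPT 3.5 Turbo", "Model Evaluation": "No"},
    {"Model Evaluated": "GPT-4 Turbo", "Model Evaluation": "Yes"},
]

correct, total = Counter(), Counter()
for row in rows:
    total[row["Model Evaluated"]] += 1
    if row["Model Evaluation"] == "Yes":
        correct[row["Model Evaluated"]] += 1

# Fraction of answers GPT-4 judged correct, per answering model.
accuracy = {model: correct[model] / total[model] for model in total}
print(accuracy)  # {'ChatGPT 3.5 Turbo': 0.5, 'GPT-4 Turbo': 1.0}
```

The same loop works unchanged over the real split loaded with `load_dataset` as shown above.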
adumitrescu18/Financial10kBenchmark
[ "license:mit", "region:us" ]
2024-01-08T16:41:30+00:00
{"license": "mit"}
2024-01-09T16:03:46+00:00
[]
[]
TAGS #license-mit #region-us
Dataset Card for 10-K Benchmark =============================== Dataset Description ------------------- This is a synthetically generated financial benchmark for large language models based on a wide range of 10-K documents. The benchmark was subsequently run on multiple LLMs, which provided candidate answers for evaluation based on both the context and the generated question. The original 10-K dataset contains annual reports of public US companies that filed with the SEC / the EDGAR system from 1993-2020. The link to the original dataset used for context extraction can be found here. GPT-4 was then used in conjunction with these pieces of extracted content to automatically generate an adversarial dataset of complex and challenging questions covering calculations, financial domain expertise and legal / regulatory knowledge. A wide variety of models including GPT-4-Turbo, GPT-3.5-Turbo, Llama-7b-chat-hf, Llama-2-70b-chat-hf, Zephyr-7b-Beta, Solar-10.7b-Instruct-v1.0, Mistral-7B-Instruct-v0.2, Mixtral-8x7B-Instruct-v0.1 were then used to answer the synthetically generated questions using both the "golden context" provided AND their own knowledge. Lastly, GPT-4 was used to evaluate the correctness of each answer that was generated by each of the models. GPT-4 provided (1) a binary "Yes/No" label (which is shared in the dataset), and (2) an accompanying justification (which was manually reviewed by our team, but not included in the dataset). For full details on the dataset processing and overall procedure / findings of this study, please see the accompanying Medium article. ### Usage To load the dataset, run: To filter the dataset results by model used (in this case ChatGPT 3.5 Turbo), run: ### Data Fields The dataset has 8 different fields and attributes:
[ "### Usage\n\n\nTo load the dataset, run:\n\n\nTo filter the dataset results by model used (in this case ChatGPT 3.5 Turbo), run:", "### Data Fields\n\n\nThe dataset has 8 different fields and attributes:" ]
[ "TAGS\n#license-mit #region-us \n", "### Usage\n\n\nTo load the dataset, run:\n\n\nTo filter the dataset results by model used (in this case ChatGPT 3.5 Turbo), run:", "### Data Fields\n\n\nThe dataset has 8 different fields and attributes:" ]
[ 11, 33, 17 ]
[ "passage: TAGS\n#license-mit #region-us \n### Usage\n\n\nTo load the dataset, run:\n\n\nTo filter the dataset results by model used (in this case ChatGPT 3.5 Turbo), run:### Data Fields\n\n\nThe dataset has 8 different fields and attributes:" ]
a4ff97c58889878cd4e7c28df81b33b8fa153334
# Kannada Instruct Dataset This dataset comprises translated instructional pairs used in the training of [Ambari-7B-Instruct](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1), a language model for natural language understanding. The dataset is structured with four columns: `original_instruction`, `original_output`, `translated_instruction`, and `translated_output`. - **original_instruction** and **original_output** contain English instruction and output pairs, respectively, formatted for instruct fine-tuning. - **translated_instruction** and **translated_output** provide the corresponding translations in Kannada. ## Dataset Structure During the fine-tuning process of [Ambari-7B-Instruct](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1) on [Ambari-7B-Base](https://huggingface.co/Cognitive-Lab/Ambari-7B-base-v0.1), the dataset was organized to include four types of pairs: 1. English Instruction - English Output 2. English Instruction - Kannada Output 3. Kannada Instruction - English Output 4. Kannada Instruction - Kannada Output This intentional diversification aims to enhance the model's proficiency in cross-lingual tasks, making it more adaptable to various linguistic scenarios. ## Acknowledgement This dataset was curated from the following datasets: - [akoksal/LongForm](https://huggingface.co/datasets/akoksal/LongForm) - [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) - [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) Feel free to explore and utilize this dataset to enhance your understanding of cross-lingual natural language processing tasks. If you find it beneficial, consider citing this repository in your work. ## References - [Ambari-7B-Instruct](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1) - [Ambari-7B-Base](https://huggingface.co/Cognitive-Lab/Ambari-7B-base-v0.1)
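The four pair types described above can be materialized from a single row with a few lines of Python. A sketch only: the column names match the dataset, but the row contents below are placeholders, not real examples.

```python
# One toy row with the dataset's four columns (placeholder values).
row = {
    "original_instruction": "Summarize this paragraph.",
    "original_output": "A short English summary.",
    "translated_instruction": "<Kannada instruction>",
    "translated_output": "<Kannada output>",
}

# The four instruction/output pairings used during fine-tuning.
pairs = [
    (row["original_instruction"], row["original_output"]),      # English -> English
    (row["original_instruction"], row["translated_output"]),    # English -> Kannada
    (row["translated_instruction"], row["original_output"]),    # Kannada -> English
    (row["translated_instruction"], row["translated_output"]),  # Kannada -> Kannada
]
print(len(pairs))  # 4
```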
Cognitive-Lab/Kannada-Instruct-dataset
[ "task_categories:question-answering", "size_categories:100K<n<1M", "language:en", "language:kn", "license:mit", "bilingual", "Instruction", "kannada", "region:us" ]
2024-01-08T20:49:22+00:00
{"language": ["en", "kn"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"], "pretty_name": "Kannada-instruct", "dataset_info": {"features": [{"name": "original_instruction", "dtype": "string"}, {"name": "original_output", "dtype": "string"}, {"name": "translated_instruction", "dtype": "string"}, {"name": "translated_output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 506651931, "num_examples": 389608}], "download_size": 232649343, "dataset_size": 506651931}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["bilingual", "Instruction", "kannada"]}
2024-01-23T18:14:47+00:00
[]
[ "en", "kn" ]
TAGS #task_categories-question-answering #size_categories-100K<n<1M #language-English #language-Kannada #license-mit #bilingual #Instruction #kannada #region-us
# Kannada Instruct Dataset This dataset comprises translated instructional pairs used in the training of Ambari-7B-Instruct, a language model for natural language understanding. The dataset is structured with four columns: 'original_instruction', 'original_output', 'translated_instruction', and 'translated_output'. - original_instruction and original_output contain English instruction and output pairs, respectively, formatted for instruct fine-tuning. - translated_instruction and translated_output provide the corresponding translations in Kannada. ## Dataset Structure During the fine-tuning process of Ambari-7B-Instruct on Ambari-7B-Base, the dataset was organized to include four types of pairs: 1. English Instruction - English Output 2. English Instruction - Kannada Output 3. Kannada Instruction - English Output 4. Kannada Instruction - Kannada Output This intentional diversification aims to enhance the model's proficiency in cross-lingual tasks, making it more adaptable to various linguistic scenarios. ## Acknowledgement This dataset was curated from the following datasets: - akoksal/LongForm - databricks/databricks-dolly-15k - tatsu-lab/alpaca Feel free to explore and utilize this dataset to enhance your understanding of cross-lingual natural language processing tasks. If you find it beneficial, consider citing this repository in your work. ## References - Ambari-7B-Instruct - Ambari-7B-Base
[ "# Kannada Instruct Dataset\n\nThis dataset comprises translated instructional pairs used in the training of Ambari-7B-Instruct, a language model for natural language understanding. The dataset is structured with four columns: 'original_instruction', 'original_output', 'translated_instruction', and 'translated_output'.\n\n- original_instruction and original_output contain English instruction and output pairs, respectively, formatted for instruct fine-tuning.\n- translated_instruction and translated_output provide the corresponding translations in Kannada.", "## Dataset Structure\n\nDuring the fine-tuning process of Ambari-7B-Instruct on Ambari-7B-Base, the dataset was organized to include four types of pairs:\n\n1. English Instruction - English Output\n2. English Instruction - Kannada Output\n3. Kannada Instruction - English Output\n4. Kannada Instruction - Kannada Output\n\nThis intentional diversification aims to enhance the model's proficiency in cross-lingual tasks, making it more adaptable to various linguistic scenarios.", "## Acknowledgement\n\nThis dataset was curated from the following datasets:\n\n- akoksal/LongForm\n- databricks/databricks-dolly-15k\n- tatsu-lab/alpaca\n\nFeel free to explore and utilize this dataset to enhance your understanding of cross-lingual natural language processing tasks. If you find it beneficial, consider citing this repository in your work.", "## References\n- Ambari-7B-Instruct\n- Ambari-7B-Base" ]
[ "TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #language-Kannada #license-mit #bilingual #Instruction #kannada #region-us \n", "# Kannada Instruct Dataset\n\nThis dataset comprises translated instructional pairs used in the training of Ambari-7B-Instruct, a language model for natural language understanding. The dataset is structured with four columns: 'original_instruction', 'original_output', 'translated_instruction', and 'translated_output'.\n\n- original_instruction and original_output contain English instruction and output pairs, respectively, formatted for instruct fine-tuning.\n- translated_instruction and translated_output provide the corresponding translations in Kannada.", "## Dataset Structure\n\nDuring the fine-tuning process of Ambari-7B-Instruct on Ambari-7B-Base, the dataset was organized to include four types of pairs:\n\n1. English Instruction - English Output\n2. English Instruction - Kannada Output\n3. Kannada Instruction - English Output\n4. Kannada Instruction - Kannada Output\n\nThis intentional diversification aims to enhance the model's proficiency in cross-lingual tasks, making it more adaptable to various linguistic scenarios.", "## Acknowledgement\n\nThis dataset was curated from the following datasets:\n\n- akoksal/LongForm\n- databricks/databricks-dolly-15k\n- tatsu-lab/alpaca\n\nFeel free to explore and utilize this dataset to enhance your understanding of cross-lingual natural language processing tasks. If you find it beneficial, consider citing this repository in your work.", "## References\n- Ambari-7B-Instruct\n- Ambari-7B-Base" ]
[ 55, 136, 111, 90, 19 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #language-Kannada #license-mit #bilingual #Instruction #kannada #region-us \n# Kannada Instruct Dataset\n\nThis dataset comprises translated instructional pairs used in the training of Ambari-7B-Instruct, a language model for natural language understanding. The dataset is structured with four columns: 'original_instruction', 'original_output', 'translated_instruction', and 'translated_output'.\n\n- original_instruction and original_output contain English instruction and output pairs, respectively, formatted for instruct fine-tuning.\n- translated_instruction and translated_output provide the corresponding translations in Kannada.## Dataset Structure\n\nDuring the fine-tuning process of Ambari-7B-Instruct on Ambari-7B-Base, the dataset was organized to include four types of pairs:\n\n1. English Instruction - English Output\n2. English Instruction - Kannada Output\n3. Kannada Instruction - English Output\n4. Kannada Instruction - Kannada Output\n\nThis intentional diversification aims to enhance the model's proficiency in cross-lingual tasks, making it more adaptable to various linguistic scenarios.## Acknowledgement\n\nThis dataset was curated from the following datasets:\n\n- akoksal/LongForm\n- databricks/databricks-dolly-15k\n- tatsu-lab/alpaca\n\nFeel free to explore and utilize this dataset to enhance your understanding of cross-lingual natural language processing tasks. If you find it beneficial, consider citing this repository in your work.## References\n- Ambari-7B-Instruct\n- Ambari-7B-Base" ]
67a735b19a6fbb610c30d18dc283a641af2d5707
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. 
It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
trumble2024/Rick-bot-flags
[ "region:us" ]
2024-01-08T20:58:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
2024-01-08T20:58:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 8, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
dcdb260cd059403abe8836f0cee3e7dd7535b863
This dataset is based on the Japanese version of the Wikipedia dataset and converted into a multi-turn conversation format using llama2Pro8B. After generating 10,000 conversations and screening, only about 3,000 were usable, so I will publish them in this state first. Since it is under the llama2 license, it can be used commercially for services. Some strange dialogue may be included, as it has not been screened by humans. We generated 10,000 conversations over 24 hours on an A100 80GBx7 machine and automatically screened them. # Model https://huggingface.co/spaces/TencentARC/LLaMA-Pro-8B-Instruct-Chat # Dataset https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720 # Compute by Tsuginosuke AI SuperComputer FreeAI Ltd. https://free-ai.ltd
shi3z/ja_conv_wikipedia_llama2pro8b_3k
[ "task_categories:conversational", "size_categories:1K<n<10K", "language:ja", "license:llama2", "region:us" ]
2024-01-08T23:11:22+00:00
{"language": ["ja"], "license": "llama2", "size_categories": ["1K<n<10K"], "task_categories": ["conversational"]}
2024-01-09T23:18:17+00:00
[]
[ "ja" ]
TAGS #task_categories-conversational #size_categories-1K<n<10K #language-Japanese #license-llama2 #region-us
This dataset is based on the Japanese version of the Wikipedia dataset and converted into a multi-turn conversation format using llama2Pro8B. After generating 10,000 conversations and screening, only about 3,000 were usable, so I will publish them in this state first. Since it is under the llama2 license, it can be used commercially for services. Some strange dialogue may be included, as it has not been screened by humans. We generated 10,000 conversations over 24 hours on an A100 80GBx7 machine and automatically screened them. # Model URL # Dataset URL # Compute by Tsuginosuke AI SuperComputer FreeAI Ltd. URL
[ "# Model\nURL", "# Dataset\nURL", "# Compute by\nTsuginosuke AI SuperComputer\nFreeAI Ltd.\n\nURL" ]
[ "TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-Japanese #license-llama2 #region-us \n", "# Model\nURL", "# Dataset\nURL", "# Compute by\nTsuginosuke AI SuperComputer\nFreeAI Ltd.\n\nURL" ]
[ 41, 3, 4, 16 ]
[ "passage: TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-Japanese #license-llama2 #region-us \n# Model\nURL# Dataset\nURL# Compute by\nTsuginosuke AI SuperComputer\nFreeAI Ltd.\n\nURL" ]
f38747cfcc12f0cf6531d0f2e9ebbea07965bd26
## Description News, delivered to you. Better late than never! ## Model SVD ## Voice Cloée # Tags - News # Style live tv, tv channel, news report, news anchor, tv studio # Music breaking news intro, soft, electronic, balearic house ## Prompt Deep News is an AI tube channel generating videos to summarize the news of the day. The channel will focus on good or interesting news about tech, finance, AI, sport, culture etc.
jbilcke-hf/ai-tube-deep-news
[ "license:cc-by-nc-4.0", "region:us" ]
2024-01-09T00:21:12+00:00
{"license": "cc-by-nc-4.0", "pretty_name": "Deep News"}
2024-01-31T20:53:51+00:00
[]
[]
TAGS #license-cc-by-nc-4.0 #region-us
## Description News, delivered to you. Better late than never! ## Model SVD ## Voice Cloée # Tags - News # Style live tv, tv channel, news report, news anchor, tv studio # Music breaking news intro, soft, electronic, balearic house ## Prompt Deep News is an AI tube channel generating videos to summarize the news of the day. The channel will focus on good or interesting news about tech, finance, AI, sport, culture etc.
[ "## Description\n\nNews, delivered to you. Better late than never!", "## Model\n\nSVD", "## Voice\n\nCloée", "# Tags\n\n- News", "# Style\n\nlive tv, tv channel, news report, news anchor, tv studio", "# Music\n\nbreaking news intro, soft, electronic, balearic house", "## Prompt\n\nDeep News is an AI tube channel generating videos to summarize the news of the day.\nThe channel will focus on good or interesting news about tech, finance, AI, sport, culture etc." ]
[ "TAGS\n#license-cc-by-nc-4.0 #region-us \n", "## Description\n\nNews, delivered to you. Better late than never!", "## Model\n\nSVD", "## Voice\n\nCloée", "# Tags\n\n- News", "# Style\n\nlive tv, tv channel, news report, news anchor, tv studio", "# Music\n\nbreaking news intro, soft, electronic, balearic house", "## Prompt\n\nDeep News is an AI tube channel generating videos to summarize the news of the day.\nThe channel will focus on good or interesting news about tech, finance, AI, sport, culture etc." ]
[ 17, 14, 4, 4, 4, 17, 14, 45 ]
[ "passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n## Description\n\nNews, delivered to you. Better late than never!## Model\n\nSVD## Voice\n\nCloée# Tags\n\n- News# Style\n\nlive tv, tv channel, news report, news anchor, tv studio# Music\n\nbreaking news intro, soft, electronic, balearic house## Prompt\n\nDeep News is an AI tube channel generating videos to summarize the news of the day.\nThe channel will focus on good or interesting news about tech, finance, AI, sport, culture etc." ]
bc09d30c14d119ca2f457a569b6ff03d2117e0ea
A collection of pairwise feedback datasets. - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - [openai/webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons) - [Dahoas/instruct-synthetic-prompt-responses](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses) - [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) - [lmsys/chatbot_arena_conversations](https://huggingface.co/datasets/lmsys/chatbot_arena_conversations) - [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) - [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned) - [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) Code to reproduce the dataset: [jdf-prog/UnifiedFeedback](https://github.com/jdf-prog/UnifiedFeedback) ## Dataset formats ```json { "id": "...", "conv_A": [ { "role": "user", "content": "...", }, { "role": "assistant", "content": "...", }, ... ] "conv_B": [ { "role": "user", "content": "...", }, { "role": "assistant", "content": "...", }, ... ] "conv_A_rating": 1, # not necessarily always 1 and 0. The general principle is that higher is better "conv_B_rating": 0, # not necessarily always 1 and 0. The general principle is that higher is better "num_turns": 1, # the number of conversation turns. one user and one assistant counts for 1 turn. "source": "...", # hugging face dataset source } ``` To convert conversations into pairwise prompt and response examples for training reward models like [PairRM](https://huggingface.co/datasets/llm-blender/PairRM): - For conversations with 1 turn, simply take the user content and assistant content as the input and output. 
```python def get_pair_from_conv_for_single_turn(convAs: List[List[dict]], convBs: List[List[dict]]): """Compare two conversations by taking USER turns as inputs and ASSISTANT turns as candidates Multi-turn conversations comparison is also supported. A conversation format is: ```python [ { "content": "hello", "role": "USER" }, { "content": "hi", "role": "ASSISTANT" }, ... ] ``` Args: convAs (List[List[dict]]): List of conversations convBs (List[List[dict]]): List of conversations """ for c in convAs + convBs: assert len(c) % 2 == 0, "Each conversation must have even number of turns" assert all([c[i]['role'].upper() == 'USER' for i in range(0, len(c), 2)]), "Each even turn must be USER" assert all([c[i]['role'].upper() == 'ASSISTANT' for i in range(1, len(c), 2)]), "Each odd turn must be ASSISTANT" # check conversations correctness assert len(convAs) == len(convBs), "Number of conversations must be the same" for c_a, c_b in zip(convAs, convBs): assert len(c_a) == len(c_b), "Number of turns in each conversation must be the same" assert all([c_a[i]['content'] == c_b[i]['content'] for i in range(0, len(c_a), 2)]), "USER turns must be the same" inputs = [ convAs[i][0]['content'] for i in range(len(convAs)) ] cand1_texts = [ convAs[i][1]['content'] for i in range(len(convAs)) ] cand2_texts = [ convBs[i][1]['content'] for i in range(len(convBs)) ] return inputs, cand1_texts, cand2_texts ``` - For conversations with more than 1 turn, convert them into input-output pairs using the following function: ```python def get_pair_from_conv(convAs: List[List[dict]], convBs: List[List[dict]]): """Compare two conversations by taking USER turns as inputs and ASSISTANT turns as candidates Multi-turn conversations comparison is also supported. A conversation format is: ```python [ { "content": "hello", "role": "USER" }, { "content": "hi", "role": "ASSISTANT" }, ... 
] ``` Args: convAs (List[List[dict]]): List of conversations convBs (List[List[dict]]): List of conversations """ for c in convAs + convBs: assert len(c) % 2 == 0, "Each conversation must have even number of turns" assert all([c[i]['role'].upper() == 'USER' for i in range(0, len(c), 2)]), "Each even turn must be USER" assert all([c[i]['role'].upper() == 'ASSISTANT' for i in range(1, len(c), 2)]), "Each odd turn must be ASSISTANT" # check conversations correctness assert len(convAs) == len(convBs), "Number of conversations must be the same" for c_a, c_b in zip(convAs, convBs): assert len(c_a) == len(c_b), "Number of turns in each conversation must be the same" assert all([c_a[i]['content'] == c_b[i]['content'] for i in range(0, len(c_a), 2)]), "USER turns must be the same" instructions = ["Finish the following conversation in each i-th turn by filling in <Response i> with your response."] * len(convAs) inputs = [ "\n".join([ "USER: " + x[i]['content'] + f"\nAssistant: <Response {i//2+1}>" for i in range(0, len(x), 2) ]) for x in convAs ] cand1_texts = [ "\n".join([ f"<Response {i//2+1}>: " + x[i]['content'] for i in range(1, len(x), 2) ]) for x in convAs ] cand2_texts = [ "\n".join([ f"<Response {i//2+1}>: " + x[i]['content'] for i in range(1, len(x), 2) ]) for x in convBs ] inputs = [inst + inp for inst, inp in zip(instructions, inputs)] return inputs, cand1_texts, cand2_texts ```
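As a quick, self-contained sanity check, the single-turn conversion can be exercised on a toy pair of conversations. This is a condensed restatement of the single-turn helper (the validation asserts are omitted for brevity, and the toy contents are illustrative):

```python
from typing import List

def get_pair_from_conv_for_single_turn(convAs: List[list], convBs: List[list]):
    """Take the shared USER turn as the input and the two ASSISTANT turns as candidates."""
    inputs = [conv[0]["content"] for conv in convAs]
    cand1_texts = [conv[1]["content"] for conv in convAs]
    cand2_texts = [conv[1]["content"] for conv in convBs]
    return inputs, cand1_texts, cand2_texts

# Two single-turn conversations sharing the same USER prompt.
conv_a = [{"role": "USER", "content": "What is 2+2?"},
          {"role": "ASSISTANT", "content": "2+2 equals 4."}]
conv_b = [{"role": "USER", "content": "What is 2+2?"},
          {"role": "ASSISTANT", "content": "It is 5."}]

inputs, cand1, cand2 = get_pair_from_conv_for_single_turn([conv_a], [conv_b])
print(inputs)  # ['What is 2+2?']
print(cand1)   # ['2+2 equals 4.']
print(cand2)   # ['It is 5.']
```

The resulting `(input, candidate_1, candidate_2)` triples, together with `conv_A_rating` / `conv_B_rating`, give the preference labels a pairwise reward model trains on.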
llm-blender/Unified-Feedback
[ "license:mit", "region:us" ]
2024-01-09T00:46:23+00:00
{"license": "mit", "configs": [{"config_name": "Nectar", "data_files": [{"split": "train", "path": "Nectar/train-*"}, {"split": "val", "path": "Nectar/val-*"}]}, {"config_name": "UltraFeedback", "data_files": [{"split": "train", "path": "UltraFeedback/train-*"}, {"split": "val", "path": "UltraFeedback/val-*"}]}, {"config_name": "all", "data_files": [{"split": "train", "path": "all/train-*"}, {"split": "val", "path": "all/val-*"}]}, {"config_name": "chatbot_arena_conversations", "data_files": [{"split": "train", "path": "chatbot_arena_conversations/train-*"}, {"split": "val", "path": "chatbot_arena_conversations/val-*"}]}, {"config_name": "hh-rlhf", "data_files": [{"split": "train", "path": "hh-rlhf/train-*"}, {"split": "val", "path": "hh-rlhf/val-*"}]}, {"config_name": "summarize_from_feedback", "data_files": [{"split": "train", "path": "summarize_from_feedback/train-*"}, {"split": "val", "path": "summarize_from_feedback/val-*"}]}, {"config_name": "synthetic-instruct-gptj-pairwise", "data_files": [{"split": "train", "path": "synthetic-instruct-gptj-pairwise/train-*"}, {"split": "val", "path": "synthetic-instruct-gptj-pairwise/val-*"}]}, {"config_name": "ultrafeedback-binarized-preferences-cleaned", "data_files": [{"split": "train", "path": "ultrafeedback-binarized-preferences-cleaned/train-*"}, {"split": "val", "path": "ultrafeedback-binarized-preferences-cleaned/val-*"}]}, {"config_name": "webgpt_comparisons", "data_files": [{"split": "train", "path": "webgpt_comparisons/train-*"}, {"split": "val", "path": "webgpt_comparisons/val-*"}]}], "dataset_info": [{"config_name": "Nectar", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", "dtype": 
"int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1081068032, "num_examples": 364908}, {"name": "val", "num_bytes": 3078273, "num_examples": 1000}], "download_size": 795012574, "dataset_size": 1084146305}, {"config_name": "UltraFeedback", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 482967411, "num_examples": 126932}, {"name": "val", "num_bytes": 3796982, "num_examples": 1000}], "download_size": 177911526, "dataset_size": 486764393}, {"config_name": "all", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2543590252, "num_examples": 886136}, {"name": "val", "num_bytes": 21099375, "num_examples": 8000}], "download_size": 985151681, "dataset_size": 2564689627}, {"config_name": "chatbot_arena_conversations", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", 
"dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74939906, "num_examples": 32000}, {"name": "val", "num_bytes": 2316679, "num_examples": 1000}], "download_size": 223568621, "dataset_size": 77256585}, {"config_name": "hh-rlhf", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 320647584, "num_examples": 159800}, {"name": "val", "num_bytes": 1962414, "num_examples": 1000}], "download_size": 351983129, "dataset_size": 322609998}, {"config_name": "summarize_from_feedback", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 274935253, "num_examples": 91858}, {"name": "val", "num_bytes": 2971753, "num_examples": 1000}], "download_size": 65500776, "dataset_size": 277907006}, {"config_name": "synthetic-instruct-gptj-pairwise", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, 
{"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37011887, "num_examples": 32143}, {"name": "val", "num_bytes": 1141460, "num_examples": 1000}], "download_size": 59370567, "dataset_size": 38153347}, {"config_name": "ultrafeedback-binarized-preferences-cleaned", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 242561699, "num_examples": 59917}, {"name": "val", "num_bytes": 4009990, "num_examples": 1000}], "download_size": 244276692, "dataset_size": 246571689}, {"config_name": "webgpt", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", "dtype": "float32"}, {"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37161667, "num_examples": 18578}, {"name": "val", "num_bytes": 1990672, "num_examples": 1000}], "download_size": 22652378, "dataset_size": 39152339}, {"config_name": "webgpt_comparisons", "features": [{"name": "id", "dtype": "string"}, {"name": "conv_A", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_B", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "conv_A_rating", "dtype": "float32"}, {"name": "conv_B_rating", 
"dtype": "float32"}, {"name": "num_turns", "dtype": "int32"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37384603, "num_examples": 18578}, {"name": "val", "num_bytes": 2002672, "num_examples": 1000}], "download_size": 45308694, "dataset_size": 39387275}]}
2024-01-29T18:46:34+00:00
[]
[]
TAGS #license-mit #region-us
A collection of pairwise feedback datasets. - openai/summarize_from_feedback - openai/webgpt_comparisons - Dahoas/instruct-synthetic-prompt-responses - Anthropic/hh-rlhf - lmsys/chatbot_arena_conversations - openbmb/UltraFeedback - argilla/ultrafeedback-binarized-preferences-cleaned - berkeley-nest/Nectar Code to reproduce the dataset: jdf-prog/UnifiedFeedback ## Dataset formats How to convert conversations into pairwise prompt-response examples for training reward models like PairRM. - For conversations with 1 turn Simply take the user content and assistant content of the single turn as the input and output. python [ { "content": "hello", "role": "USER" }, { "content": "hi", "role": "ASSISTANT" }, ... ] - For conversations with more than 1 turn Convert them into input-output pairs using the following function: python [ { "content": "hello", "role": "USER" }, { "content": "hi", "role": "ASSISTANT" }, ... ]
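The card's multi-turn conversion function did not survive extraction (only the conversation example remains). A minimal sketch of such a function — hypothetical, not the card's original code — could look like:

```python
def conversation_to_pairs(conversation):
    """Split a multi-turn USER/ASSISTANT conversation into (input, output)
    pairs: one pair per ASSISTANT turn, where the input is the rendered
    dialogue history up to that turn and the output is the assistant reply."""
    pairs = []
    history = []
    for turn in conversation:
        if turn["role"] == "ASSISTANT":
            prompt = "\n".join(f'{t["role"]}: {t["content"]}' for t in history)
            pairs.append((prompt, turn["content"]))
        history.append(turn)
    return pairs


conv = [
    {"content": "hello", "role": "USER"},
    {"content": "hi", "role": "ASSISTANT"},
    {"content": "how are you?", "role": "USER"},
    {"content": "fine, thanks", "role": "ASSISTANT"},
]
# Two assistant turns -> two training pairs; the second pair's input
# contains the full preceding history.
pairs = conversation_to_pairs(conv)
```

Here the history is rendered as plain `ROLE: content` lines for illustration; in practice the reward model's own prompt template would be used instead.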
[ "## Dataset formats\n\n\n\nTo convert to pairwise prompt and response to train reward models like PairRM.\n\n- For conversations with 1 turn\n\nSimply take the user content and assistant content for 1 turn as the input and output.\npython\n [\n {\n \"content\": \"hello\",\n \"role\": \"USER\"\n },\n {\n \"content\": \"hi\",\n \"role\": \"ASSISTANT\"\n },\n ...\n ]\n \n\n\n- For conversations with more than 1 turns\n\nconvert it into input output pairs using the following function:\npython\n [\n {\n \"content\": \"hello\",\n \"role\": \"USER\"\n },\n {\n \"content\": \"hi\",\n \"role\": \"ASSISTANT\"\n },\n ...\n ]" ]
[ "TAGS\n#license-mit #region-us \n", "## Dataset formats\n\n\n\nTo convert to pairwise prompt and response to train reward models like PairRM.\n\n- For conversations with 1 turn\n\nSimply take the user content and assistant content for 1 turn as the input and output.\npython\n [\n {\n \"content\": \"hello\",\n \"role\": \"USER\"\n },\n {\n \"content\": \"hi\",\n \"role\": \"ASSISTANT\"\n },\n ...\n ]\n \n\n\n- For conversations with more than 1 turns\n\nconvert it into input output pairs using the following function:\npython\n [\n {\n \"content\": \"hello\",\n \"role\": \"USER\"\n },\n {\n \"content\": \"hi\",\n \"role\": \"ASSISTANT\"\n },\n ...\n ]" ]
[ 11, 159 ]
[ "passage: TAGS\n#license-mit #region-us \n## Dataset formats\n\n\n\nTo convert to pairwise prompt and response to train reward models like PairRM.\n\n- For conversations with 1 turn\n\nSimply take the user content and assistant content for 1 turn as the input and output.\npython\n [\n {\n \"content\": \"hello\",\n \"role\": \"USER\"\n },\n {\n \"content\": \"hi\",\n \"role\": \"ASSISTANT\"\n },\n ...\n ]\n \n\n\n- For conversations with more than 1 turns\n\nconvert it into input output pairs using the following function:\npython\n [\n {\n \"content\": \"hello\",\n \"role\": \"USER\"\n },\n {\n \"content\": \"hi\",\n \"role\": \"ASSISTANT\"\n },\n ...\n ]" ]
1238a910eebbb3b09d94d321af8b957cb60764f7
The dataset encompasses news articles spanning from November 29, 2023, to February 6, 2024, capturing the discourse surrounding the five presidential debates orchestrated by the General Elections Commission. Sourced from reputable platforms such as detik, kompas, and liputan6, the dataset offers a comprehensive insight into the electoral landscape and the media coverage thereof.
casecrit/2024-indonesian-election
[ "size_categories:10M<n<100M", "language:id", "license:cc-by-nc-2.0", "doi:10.57967/hf/1756", "region:us" ]
2024-01-09T03:19:10+00:00
{"language": ["id"], "license": "cc-by-nc-2.0", "size_categories": ["10M<n<100M"]}
2024-02-10T15:21:59+00:00
[]
[ "id" ]
TAGS #size_categories-10M<n<100M #language-Indonesian #license-cc-by-nc-2.0 #doi-10.57967/hf/1756 #region-us
The dataset encompasses news articles spanning from November 29, 2023, to February 6, 2024, capturing the discourse surrounding the five presidential debates orchestrated by the General Elections Commission. Sourced from reputable platforms such as detik, kompas, and liputan6, the dataset offers a comprehensive insight into the electoral landscape and the media coverage thereof.
[]
[ "TAGS\n#size_categories-10M<n<100M #language-Indonesian #license-cc-by-nc-2.0 #doi-10.57967/hf/1756 #region-us \n" ]
[ 46 ]
[ "passage: TAGS\n#size_categories-10M<n<100M #language-Indonesian #license-cc-by-nc-2.0 #doi-10.57967/hf/1756 #region-us \n" ]
f3112d625202c271562ff5f8d796f967ee201b7f
# Dataset Card for "metamathqa_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/metamathqa_binarized
[ "region:us" ]
2024-01-09T03:30:21+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 264809523.6, "num_examples": 355500}, {"name": "test", "num_bytes": 29423280.4, "num_examples": 39500}], "download_size": 129428148, "dataset_size": 294232804.0}}
2024-01-09T03:30:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "metamathqa_binarized" More Information needed
[ "# Dataset Card for \"metamathqa_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"metamathqa_binarized\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"metamathqa_binarized\"\n\nMore Information needed" ]
9cbd3463039ea84aa3a28e3f090432d82750fc30
<img src="https://cdn-uploads.huggingface.co/production/uploads/6303f37c3926de1f7ec42d3e/Ufp7Ec0nfJblft2VXXZBa.png" width="512" height="512"> **PixelParti128** is a synthetic dataset derived from SDXL Turbo and Pixel Art XL. It contains 4 samples of each prompt in <a href="https://github.com/google-research/parti">PartiPrompts</a>. PixelParti128 v0.1 is a collection of ~4800 synthetic samples with their corresponding prompt and seed. Each sample is downscaled and quantized to 128px, making them pixel-perfect. ## Replication You can replicate this dataset using SDXL Turbo with Pixel Art XL, 8 steps of Euler a, CFG = 1, and Diffusers. <img src="https://cdn-uploads.huggingface.co/production/uploads/6303f37c3926de1f7ec42d3e/GUDdifE0G2NQxOTX9zl0t.jpeg" width="512" height="512"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6303f37c3926de1f7ec42d3e/rgEcDVHYjRPLf1NZ5Hssq.jpeg" width="512" height="512"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6303f37c3926de1f7ec42d3e/yJEWENiVtNqO-yr7Kp4ul.jpeg" width="512" height="512">
nerijs/pixelparti-128-v0.1
[ "license:apache-2.0", "region:us" ]
2024-01-09T03:58:41+00:00
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 34521297.512, "num_examples": 4836}], "download_size": 34764278, "dataset_size": 34521297.512}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T05:06:34+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
<img src="URL width="512" height="512"> PixelParti128 is a synthetic dataset derived from SDXL Turbo and Pixel Art XL. Contains 4 samples of each prompt in <a href="URL PixelParti128 v0.1 is a collection of ~4800 synthetic samples with their corresponding prompt and seed. Each sample is downscaled and quantized to 128px making them pixel perfect. ## Replication You can replicate this dataset using SDXL Turbo with Pixel Art XL, 8 steps of Euler a, CFG = 1 and Diffusers. <img src="URL width="512" height="512"> <img src="URL width="512" height="512"> <img src="URL width="512" height="512">
[ "## Replication\nYou can replicate this dataset using SDXL Turbo with Pixel Art XL, 8 steps of Euler a, CFG = 1 and Diffusers.\n\n<img src=\"URL width=\"512\" height=\"512\">\n<img src=\"URL width=\"512\" height=\"512\">\n<img src=\"URL width=\"512\" height=\"512\">" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "## Replication\nYou can replicate this dataset using SDXL Turbo with Pixel Art XL, 8 steps of Euler a, CFG = 1 and Diffusers.\n\n<img src=\"URL width=\"512\" height=\"512\">\n<img src=\"URL width=\"512\" height=\"512\">\n<img src=\"URL width=\"512\" height=\"512\">" ]
[ 14, 89 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n## Replication\nYou can replicate this dataset using SDXL Turbo with Pixel Art XL, 8 steps of Euler a, CFG = 1 and Diffusers.\n\n<img src=\"URL width=\"512\" height=\"512\">\n<img src=\"URL width=\"512\" height=\"512\">\n<img src=\"URL width=\"512\" height=\"512\">" ]
2dbaa90cb0b3cd30d295b388195f217f155d1ca6
# Dataset Card for "distil_math_dpo_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/distil_math_dpo_binarized
[ "region:us" ]
2024-01-09T04:34:27+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9487797, "num_examples": 2176}, {"name": "test", "num_bytes": 985660, "num_examples": 242}], "download_size": 4616780, "dataset_size": 10473457}}
2024-02-16T16:14:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "distil_math_dpo_binarized" More Information needed
[ "# Dataset Card for \"distil_math_dpo_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"distil_math_dpo_binarized\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"distil_math_dpo_binarized\"\n\nMore Information needed" ]
d3b62d5f69ad7dfc606fad5bad06fd56b7903544
# Dataset Card for "athirdpath_roleplay_dpo_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/athirdpath_roleplay_dpo_binarized
[ "region:us" ]
2024-01-09T04:38:00+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 10769968.084889147, "num_examples": 3085}, {"name": "test", "num_bytes": 1197438.9151108519, "num_examples": 343}], "download_size": 4623261, "dataset_size": 11967407.0}}
2024-01-09T04:38:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "athirdpath_roleplay_dpo_binarized" More Information needed
[ "# Dataset Card for \"athirdpath_roleplay_dpo_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"athirdpath_roleplay_dpo_binarized\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"athirdpath_roleplay_dpo_binarized\"\n\nMore Information needed" ]
bd7d81607cbd4dfa6adba3b29608be461d21e9f1
# Dataset Card for "orca_dpo_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/orca_dpo_binarized
[ "region:us" ]
2024-01-09T04:40:15+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 42576577.4042305, "num_examples": 11573}, {"name": "test", "num_bytes": 4731139.5957695, "num_examples": 1286}], "download_size": 24313742, "dataset_size": 47307717.0}}
2024-01-09T04:40:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "orca_dpo_binarized" More Information needed
[ "# Dataset Card for \"orca_dpo_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"orca_dpo_binarized\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"orca_dpo_binarized\"\n\nMore Information needed" ]
f9450ac8fe855f463a2a72695802f2a7158e6d0a
# Dataset Card for "tc_dpo_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/tc_dpo_binarized
[ "region:us" ]
2024-01-09T04:42:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 958673.2682119205, "num_examples": 271}, {"name": "test", "num_bytes": 109663.73178807947, "num_examples": 31}], "download_size": 517803, "dataset_size": 1068337.0}}
2024-01-09T04:42:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "tc_dpo_binarized" More Information needed
[ "# Dataset Card for \"tc_dpo_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"tc_dpo_binarized\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"tc_dpo_binarized\"\n\nMore Information needed" ]
b9006faf7d369fe186831326bd7e755e4c90aed3
# Dataset Card for "hh_rlhf_reversed_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/hh_rlhf_reversed_binarized
[ "region:us" ]
2024-01-09T04:50:15+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 312180198, "num_examples": 160800}, {"name": "test", "num_bytes": 16755445, "num_examples": 8552}], "download_size": 181796170, "dataset_size": 328935643}}
2024-01-09T19:14:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hh_rlhf_reversed_binarized" More Information needed
[ "# Dataset Card for \"hh_rlhf_reversed_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hh_rlhf_reversed_binarized\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hh_rlhf_reversed_binarized\"\n\nMore Information needed" ]
277e9828e78845f16594e18f7143ce7755fe736f
# Dataset Card for "synthetic_gpt35_dpo_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/synthetic_gpt35_dpo_binarized
[ "region:us" ]
2024-01-09T04:57:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 203907195.57525238, "num_examples": 184090}, {"name": "test", "num_bytes": 22656970.424747612, "num_examples": 20455}], "download_size": 118136902, "dataset_size": 226564166.0}}
2024-01-09T04:58:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "synthetic_gpt35_dpo_binarized" More Information needed
[ "# Dataset Card for \"synthetic_gpt35_dpo_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"synthetic_gpt35_dpo_binarized\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"synthetic_gpt35_dpo_binarized\"\n\nMore Information needed" ]
45ac5e01122055b50bc3cddb3c925df1323d5bc5
Just copied from migtissera/Hitchhiker
adi-kmt/hitchiker
[ "task_categories:text-generation", "language:en", "license:apache-2.0", "region:us" ]
2024-01-09T05:04:49+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"]}
2024-01-09T05:06:33+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #language-English #license-apache-2.0 #region-us
Just copied from migtissera/Hitchhiker
[]
[ "TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #region-us \n" ]
[ 29 ]
[ "passage: TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #region-us \n" ]
8ebe7d5ff8f5afef69d231e565409da9610a7ea6
Alpaca-formatted migtissera/Synthia-v1.3
adi-kmt/sythia_alpaca
[ "task_categories:text-generation", "language:en", "license:apache-2.0", "region:us" ]
2024-01-09T05:27:10+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"]}
2024-01-09T12:55:37+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #language-English #license-apache-2.0 #region-us
alpaca formatted migtissera/Synthia-v1.3
[]
[ "TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #region-us \n" ]
[ 29 ]
[ "passage: TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #region-us \n" ]
1228cbd81cf2f63c0115f813a7a202b668f2121a
# Dataset Card for "truthful_qa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jxie/truthful_qa
[ "region:us" ]
2024-01-09T06:32:43+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 187702, "num_examples": 817}], "download_size": 98678, "dataset_size": 187702}}
2024-01-16T01:02:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "truthful_qa" More Information needed
[ "# Dataset Card for \"truthful_qa\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"truthful_qa\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"truthful_qa\"\n\nMore Information needed" ]
ecd9731454ffc86161b5f9837f60ed5a3151047f
# Dataset Card for "TW-Kai_all_512" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mwkldeveloper/TW-Kai_all_512
[ "region:us" ]
2024-01-09T06:37:50+00:00
{"dataset_info": {"features": [{"name": "char", "dtype": "string"}, {"name": "unicode", "dtype": "string"}, {"name": "images", "dtype": "image"}, {"name": "stroke", "dtype": "int32"}, {"name": "strokes_sequence", "sequence": "int32"}, {"name": "components", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 2628424996.75, "num_examples": 76282}], "download_size": 1804411774, "dataset_size": 2628424996.75}}
2024-01-09T06:40:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "TW-Kai_all_512" More Information needed
[ "# Dataset Card for \"TW-Kai_all_512\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"TW-Kai_all_512\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"TW-Kai_all_512\"\n\nMore Information needed" ]
09aeacf04b13c44530babeb3654132220efdc02f
# Dataset Card for "llama-classifier-data-v5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
magnifi/llama-classifier-data-v5
[ "region:us" ]
2024-01-09T06:40:42+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "user_text", "dtype": "string"}, {"name": "true_intent", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "parsed_entities", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1220832, "num_examples": 7496}, {"name": "validation", "num_bytes": 485480, "num_examples": 2809}], "download_size": 251332, "dataset_size": 1706312}}
2024-01-09T06:40:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "llama-classifier-data-v5" More Information needed
[ "# Dataset Card for \"llama-classifier-data-v5\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"llama-classifier-data-v5\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"llama-classifier-data-v5\"\n\nMore Information needed" ]
220ab7687e3081d8eafd54aca6ef03e23ee08086
# Dataset Card for "uf_safe_v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yimingzhang/uf_safe_v1
[ "region:us" ]
2024-01-09T06:50:34+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_prefs", "num_bytes": 794558, "num_examples": 1766}, {"name": "test_prefs", "num_bytes": 39816, "num_examples": 86}], "download_size": 468541, "dataset_size": 834374}}
2024-01-09T06:50:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for "uf_safe_v1" More Information needed
[ "# Dataset Card for \"uf_safe_v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"uf_safe_v1\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"uf_safe_v1\"\n\nMore Information needed" ]
86c19431003c19a966c3d94fbe967dbc3f2e4fdc
Kannada translation of nvidia/HelpSteer
Tensoic/nvidia_helpsteer_kn
[ "task_categories:text-generation", "language:kn", "license:cc-by-4.0", "region:us" ]
2024-01-09T07:37:28+00:00
{"language": ["kn"], "license": "cc-by-4.0", "task_categories": ["text-generation"]}
2024-01-09T12:55:00+00:00
[]
[ "kn" ]
TAGS #task_categories-text-generation #language-Kannada #license-cc-by-4.0 #region-us
Kannada translation of nvidia/HelpSteer
[]
[ "TAGS\n#task_categories-text-generation #language-Kannada #license-cc-by-4.0 #region-us \n" ]
[ 31 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Kannada #license-cc-by-4.0 #region-us \n" ]
5d231d9c75c0568e4393b4c099860d3469a578be
# Dataset Summary Vietnamese version of MedNLI. The data has been used as a benchmark for [evaluating a Vietnamese Biomedical-domain Transformer model](https://arxiv.org/abs/2210.05598). # Citation Please cite this paper if you use this dataset: ``` @misc{vipubmed, doi = {10.48550/ARXIV.2210.05598}, url = {https://arxiv.org/abs/2210.05598}, author = {Phan, Long and Dang, Tai and Tran, Hieu and Phan, Vy and Chau, Lam D. and Trinh, Trieu H.}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
VietAI/vi_mednli
[ "task_categories:text-classification", "language:vi", "license:apache-2.0", "arxiv:2210.05598", "region:us" ]
2024-01-09T08:42:07+00:00
{"language": ["vi"], "license": "apache-2.0", "task_categories": ["text-classification"]}
2024-01-09T08:47:34+00:00
[ "2210.05598" ]
[ "vi" ]
TAGS #task_categories-text-classification #language-Vietnamese #license-apache-2.0 #arxiv-2210.05598 #region-us
# Dataset Summary Vietnamese-version of MedNLI. The data has been used as a benchmark for evaluating a Vietnamese Biomedical-domain Transformer model. Please cite this paper if you use this dataset:
[ "# Dataset Summary\nVietnamese-version of MedNLI. The data has been used as a benchmark for evaluating a Vietnamese Biomedical-domain Transformer model.\n\n\nPlease cite this paper if you use this dataset:" ]
[ "TAGS\n#task_categories-text-classification #language-Vietnamese #license-apache-2.0 #arxiv-2210.05598 #region-us \n", "# Dataset Summary\nVietnamese-version of MedNLI. The data has been used as a benchmark for evaluating a Vietnamese Biomedical-domain Transformer model.\n\n\nPlease cite this paper if you use this dataset:" ]
[ 41, 49 ]
[ "passage: TAGS\n#task_categories-text-classification #language-Vietnamese #license-apache-2.0 #arxiv-2210.05598 #region-us \n# Dataset Summary\nVietnamese-version of MedNLI. The data has been used as a benchmark for evaluating a Vietnamese Biomedical-domain Transformer model.\n\n\nPlease cite this paper if you use this dataset:" ]
7501416c4ac9129a58d86852c20f99cfb044cdc1
- `transformersbook/codeparrot-train` 데이터 셋에서 Data Science 관련 코드만 추출하고 `huggingface-course/code-search-net-tokenizer` 를 사용해 토큰화를 마친 python 코드 데이터셋 - A Python code dataset containing only the Data Science-related code extracted from the `transformersbook/codeparrot-train` dataset and tokenized using `huggingface-course/code-search-net-tokenizer`.
metamath/codeparrot-ds-tokenized-128
[ "region:us" ]
2024-01-09T09:53:56+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 8618263476, "num_examples": 16702061}, {"name": "valid", "num_bytes": 48072624, "num_examples": 93164}], "download_size": 3804670335, "dataset_size": 8666336100}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}]}
2024-01-09T10:07:54+00:00
[]
[]
TAGS #region-us
- 'transformersbook/codeparrot-train' 데이터 셋에서 Data Science관련 코드만 추출하고 'huggingface-course/code-search-net-tokenizer' 를 사용해 토큰화를 마친 python 코드 데이터셋 - A python code dataset extracting only Data Science related code from the 'transformersbook/codeparrot-train' dataset and tokenized using 'huggingface-course/code-search-net-tokenizer'.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
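As a note on the row above: producing fixed-length `input_ids` sequences like these typically means tokenizing the code and slicing the token stream into equal chunks. A minimal, dependency-free sketch — the chunk length of 128 is inferred from the dataset name, and the helper itself is illustrative, not part of the dataset:

```python
def chunk_token_ids(token_ids, chunk_len=128):
    """Split a flat list of token ids into fixed-length chunks, dropping the remainder."""
    return [
        token_ids[i:i + chunk_len]
        for i in range(0, len(token_ids) - chunk_len + 1, chunk_len)
    ]

ids = list(range(300))
chunks = chunk_token_ids(ids)
print(len(chunks), len(chunks[0]))  # 2 128
```

Dropping the trailing remainder keeps every training example exactly `chunk_len` tokens long, which is what a fixed-context language-model trainer expects.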
8699c9074515a6ef91351675769c69cda55a960c
# Dataset Card for "glaive-function-calling-v2-formatted" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
togethercomputer/glaive-function-calling-v2-formatted
[ "region:us" ]
2024-01-09T09:58:28+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "messages", "dtype": "string"}, {"name": "tools", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 553629114.1147472, "num_examples": 111944}, {"name": "test", "num_bytes": 4945589.885252869, "num_examples": 1000}], "download_size": 200615118, "dataset_size": 558574704.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-21T23:15:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "glaive-function-calling-v2-formatted" More Information needed
[ "# Dataset Card for \"glaive-function-calling-v2-formatted\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"glaive-function-calling-v2-formatted\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"glaive-function-calling-v2-formatted\"\n\nMore Information needed" ]
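Since the `messages` and `tools` columns in the row above are typed as plain strings (per the features metadata), consumers typically parse them back into structured objects. A hedged sketch — the row contents below are hypothetical examples, not taken from the actual dataset:

```python
import json

# Hypothetical example row: `messages` and `tools` stored as JSON strings.
# The exact field layout is an assumption, not taken from the dataset card.
row = {
    "messages": '[{"role": "user", "content": "What is the weather in Paris?"}]',
    "tools": '[{"name": "get_weather", "parameters": {"location": "string"}}]',
}

# Decode the stringified columns back into lists of dicts
messages = json.loads(row["messages"])
tools = json.loads(row["tools"])
print(messages[0]["role"], tools[0]["name"])  # user get_weather
```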
e449660d39ec83c4ccf0bff2dcfb9bbf6943ab89
# README ## Description The second version of the PubChem324k Dataset used in the paper: [MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter](https://arxiv.org/abs/2310.12798) ## Load Example ```python import torch from torch_geometric.data import InMemoryDataset class PubChemDataset(InMemoryDataset): def __init__(self, path): super(PubChemDataset, self).__init__() self.data, self.slices = torch.load(path) def __getitem__(self, idx): return self.get(idx) if __name__ == '__main__': dataset = PubChemDataset('./pretrain.pt') print(dataset[0]) ```
acharkq/PubChem324kV2
[ "arxiv:2310.12798", "region:us" ]
2024-01-09T10:00:27+00:00
{}
2024-01-09T10:03:43+00:00
[ "2310.12798" ]
[]
TAGS #arxiv-2310.12798 #region-us
# README ## Description The second version of the PubChem324k Dataset used in the paper: MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter ## Load Example
[ "# README", "## Description \n\nThe second version of the PubChem324k Dataset used in the paper: MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter", "## Load Example" ]
[ "TAGS\n#arxiv-2310.12798 #region-us \n", "# README", "## Description \n\nThe second version of the PubChem324k Dataset used in the paper: MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter", "## Load Example" ]
[ 15, 3, 47, 5 ]
[ "passage: TAGS\n#arxiv-2310.12798 #region-us \n# README## Description \n\nThe second version of the PubChem324k Dataset used in the paper: MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter## Load Example" ]
6e14910afefb848993fa1fc2ebf107e25c3b8104
10K Vietnamese abstracts extracted from `VietAI/vi_pubmed` for demo purposes only.
razent/vi_pubmed_small
[ "size_categories:10K<n<100K", "language:vi", "region:us" ]
2024-01-09T10:02:42+00:00
{"language": ["vi"], "size_categories": ["10K<n<100K"]}
2024-01-09T10:29:49+00:00
[]
[ "vi" ]
TAGS #size_categories-10K<n<100K #language-Vietnamese #region-us
10K Vietnamese abstracts extracted from 'VietAI/vi_pubmed' for demo purposes only.
[]
[ "TAGS\n#size_categories-10K<n<100K #language-Vietnamese #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #language-Vietnamese #region-us \n" ]
8ee7c43c216ec687baf1992d13ca0f22cf027197
- `transformersbook/codeparrot-train` 데이터 셋에서 Data Science 관련 코드를 추출하고 split당 10개씩만 고른 데모용 데이터셋 - A demo dataset that extracts Data Science related code from the `transformersbook/codeparrot-train` dataset and picks only 10 pieces per split.
metamath/codeparrot-ds-raw-sm
[ "region:us" ]
2024-01-09T10:14:37+00:00
{"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60634, "num_examples": 10}, {"name": "valid", "num_bytes": 98567, "num_examples": 10}], "download_size": 86952, "dataset_size": 159201}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}]}
2024-01-09T10:16:30+00:00
[]
[]
TAGS #region-us
- 'transformersbook/codeparrot-train' 데이터 셋에서 Data Science 관련 코드를 추출하고 split당 10개씩만 고른 데모용 데이터셋 - A demo dataset that extracts Data Science related code from the 'transformersbook/codeparrot-train' dataset and picks only 10 pieces per split.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
b5a54568a3b72b2e1d7dff6f5143c2c1819613d4
# Dataset Card for "wmt22_w_shots_from_gptmt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nunonmg/wmt22_w_shots_from_gptmt
[ "region:us" ]
2024-01-09T10:18:35+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "examples", "list": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}]}, {"name": "lp", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 26661051, "num_examples": 11986}], "download_size": 14342583, "dataset_size": 26661051}}
2024-01-09T10:18:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wmt22_w_shots_from_gptmt" More Information needed
[ "# Dataset Card for \"wmt22_w_shots_from_gptmt\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wmt22_w_shots_from_gptmt\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wmt22_w_shots_from_gptmt\"\n\nMore Information needed" ]
56a7efd0a87875d67d385c6095bb1e804dff891c
This dataset contains the Czech subset of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. Each page is divided into paragraphs, stored as a list in the `chunks` column. For every paragraph, embeddings are created using the [`intfloat/multilingual-e5-small`](https://huggingface.co/intfloat/multilingual-e5-small) model.

## Usage

Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("karmiq/wikipedia-embeddings-cs-e5-small", split="train")
ds[1]
```

```
{
  'id': '1',
  'url': 'https://cs.wikipedia.org/wiki/Astronomie',
  'title': 'Astronomie',
  'chunks': [
    'Astronomie, řecky αστρονομία z άστρον ( astron ) hvězda a νόμος ( nomos )...',
    'Myšlenky Aristotelovy rozvinul ve 2. století našeho letopočtu Klaudios Ptolemaios...',
    ...,
  ],
  'embeddings': [
    [0.09006806463003159, -0.009814552962779999, ...],
    [0.10767366737127304, ...],
    ...
  ]
}
```

The structure makes it easy to use the dataset for implementing semantic search.

<details>
<summary>Load the data in Elasticsearch</summary>

```python
from elasticsearch.helpers import parallel_bulk
from tqdm import tqdm

# `es` is assumed to be an already configured Elasticsearch client instance.

def doc_generator(data, batch_size=1000):
    for batch in data.with_format("numpy").iter(batch_size):
        for i, id in enumerate(batch["id"]):
            output = {"id": id}
            output["title"] = batch["title"][i]
            output["url"] = batch["url"][i]
            output["parts"] = [
                {"chunk": chunk, "embedding": embedding}
                for chunk, embedding in zip(batch["chunks"][i], batch["embeddings"][i])
            ]
            yield output

num_indexed, num_failed = 0, 0
progress = tqdm(total=ds.num_rows, unit="doc", desc="Indexing")

for ok, info in parallel_bulk(
    es,
    index="wikipedia-search",
    actions=doc_generator(ds),
    raise_on_error=False,
):
    if not ok:
        num_failed += 1
        print(
            f"ERROR {info['index']['status']}: "
            f"{info['index']['error']['type']}: {info['index']['error']['caused_by']['type']}: "
            f"{info['index']['error']['caused_by']['reason'][:250]}")
    else:
        num_indexed += 1
    progress.update(1)
```
</details>

<details>
<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>

```python
import os
import textwrap

import sentence_transformers

model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-small")

ds.set_format(type="torch", columns=["embeddings"], output_all_columns=True)

# Flatten the dataset so that each chunk and its embedding form one row
def explode_sequence(batch):
    output = {"id": [], "url": [], "title": [], "chunk": [], "embedding": []}
    for id, url, title, chunks, embeddings in zip(
        batch["id"], batch["url"], batch["title"], batch["chunks"], batch["embeddings"]
    ):
        output["id"].extend([id for _ in range(len(chunks))])
        output["url"].extend([url for _ in range(len(chunks))])
        output["title"].extend([title for _ in range(len(chunks))])
        output["chunk"].extend(chunks)
        output["embedding"].extend(embeddings)
    return output

ds_flat = ds.map(
    explode_sequence,
    batched=True,
    remove_columns=ds.column_names,
    num_proc=min(os.cpu_count(), 32),
    desc="Flatten")
ds_flat

query = "Čím se zabývá fyzika?"

hits = sentence_transformers.util.semantic_search(
    query_embeddings=model.encode(query),
    corpus_embeddings=ds_flat["embedding"],
    top_k=10)

for hit in hits[0]:
    title = ds_flat[hit['corpus_id']]['title']
    chunk = ds_flat[hit['corpus_id']]['chunk']
    print(f"[{hit['score']:0.2f}] {textwrap.shorten(chunk, width=100, placeholder='…')} [{title}]")

# [0.90] Fyzika částic ( též částicová fyzika ) je oblast fyziky, která se zabývá částicemi. V širším smyslu… [Fyzika částic]
# [0.89] Fyzika ( z řeckého φυσικός ( fysikos ): přírodní, ze základu φύσις ( fysis ): příroda, archaicky… [Fyzika]
# ...
```
</details>

The embeddings generation took about 1 hour on an NVIDIA A100 80GB GPU.

## License

See license of the original dataset: <https://huggingface.co/datasets/wikimedia/wikipedia>.
karmiq/wikipedia-embeddings-cs-e5-small
[ "task_categories:text-generation", "task_categories:fill-mask", "size_categories:100K<n<1M", "language:cs", "license:cc-by-sa-3.0", "license:gfdl", "region:us" ]
2024-01-09T10:39:36+00:00
{"language": ["cs"], "license": ["cc-by-sa-3.0", "gfdl"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "fill-mask"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "chunks", "sequence": "string"}, {"name": "embeddings", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 3302394852, "num_examples": 534044}], "download_size": 3029933751, "dataset_size": 3302394852}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T13:21:39+00:00
[]
[ "cs" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #size_categories-100K<n<1M #language-Czech #license-cc-by-sa-3.0 #license-gfdl #region-us
This dataset contains the Czech subset of the 'wikimedia/wikipedia' dataset. Each page is divided into paragraphs, stored as a list in the 'chunks' column. For every paragraph, embeddings are created using the 'intfloat/multilingual-e5-small' model. ## Usage Load the dataset: The structure makes it easy to use the dataset for implementing semantic search. <details> <summary>Load the data in Elasticsearch</summary> </details> <details> <summary>Use <code>sentence_transformers.util.semantic_search</code></summary> </details> The embeddings generation took about 1 hour on an NVIDIA A100 80GB GPU. ## License See license of the original dataset: <URL
[ "## Usage\n\nLoad the dataset:\n\n\n\n\n\nThe structure makes it easy to use the dataset for implementing semantic search.\n\n<details>\n<summary>Load the data in Elasticsearch</summary>\n\n\n</details>\n\n<details>\n<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>\n\n\n</details>\n\nThe embeddings generation took about 1 hour on an NVIDIA A100 80GB GPU.", "## License\n\nSee license of the original dataset: <URL" ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-100K<n<1M #language-Czech #license-cc-by-sa-3.0 #license-gfdl #region-us \n", "## Usage\n\nLoad the dataset:\n\n\n\n\n\nThe structure makes it easy to use the dataset for implementing semantic search.\n\n<details>\n<summary>Load the data in Elasticsearch</summary>\n\n\n</details>\n\n<details>\n<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>\n\n\n</details>\n\nThe embeddings generation took about 1 hour on an NVIDIA A100 80GB GPU.", "## License\n\nSee license of the original dataset: <URL" ]
[ 64, 107, 12 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-100K<n<1M #language-Czech #license-cc-by-sa-3.0 #license-gfdl #region-us \n## Usage\n\nLoad the dataset:\n\n\n\n\n\nThe structure makes it easy to use the dataset for implementing semantic search.\n\n<details>\n<summary>Load the data in Elasticsearch</summary>\n\n\n</details>\n\n<details>\n<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>\n\n\n</details>\n\nThe embeddings generation took about 1 hour on an NVIDIA A100 80GB GPU.## License\n\nSee license of the original dataset: <URL" ]
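The semantic search shown in the card above ranks chunk embeddings by cosine similarity against the query embedding. A dependency-free sketch of the same top-k ranking that `sentence_transformers.util.semantic_search` performs, on toy 2-D vectors (illustrative only, not the library's implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Return the k (index, score) pairs with the highest cosine similarity."""
    scores = [(i, cosine(query, emb)) for i, emb in enumerate(corpus)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

corpus = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
# Index 0 ([1.0, 0.0]) is the closest match to the query direction
print(top_k([1.0, 0.1], corpus))
```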
68c02939c06d6850f40f2911e1d1d49bd760139d
The SD-198 dataset contains images of 198 different skin diseases, ranging from various types of eczema and acne to cancerous conditions. There are 6,584 images in total.
resyhgerwshshgdfghsdfgh/SD-198
[ "task_categories:image-classification", "license:other", "medical", "region:us" ]
2024-01-09T10:42:14+00:00
{"license": "other", "task_categories": ["image-classification"], "license_name": "unknown", "license_link": "https://xiaoxiaosun.com/docs/2016-eccv-sd198.pdf", "tags": ["medical"]}
2024-01-09T11:09:11+00:00
[]
[]
TAGS #task_categories-image-classification #license-other #medical #region-us
The SD-198 dataset contains images of 198 different skin diseases, ranging from various types of eczema and acne to cancerous conditions. There are 6,584 images in total.
[]
[ "TAGS\n#task_categories-image-classification #license-other #medical #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#task_categories-image-classification #license-other #medical #region-us \n" ]
83ae61cdf8ab600a7e6fd9bef6ae6d2a002cf9c8
# Dataset Card for "spider-bird" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
PipableAI/pip-txt-to-sql-spider-bird-dataset
[ "region:us" ]
2024-01-09T10:42:33+00:00
{"dataset_info": {"features": [{"name": "schema", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24825271, "num_examples": 16428}], "download_size": 1320142, "dataset_size": 24825271}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T10:45:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spider-bird" More Information needed
[ "# Dataset Card for \"spider-bird\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spider-bird\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spider-bird\"\n\nMore Information needed" ]
c763036ac170c96dd9d9adaa5f21ba51b1b4c57e
# PyLaia RIMES Dataset ## Table of Contents - [PyLaia RIMES Dataset](#pylaia-rimes-dataset) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) ## Dataset Description - **Homepage:** [ARTEMIS](https://artemis.telecom-sudparis.eu/2012/10/05/rimes/) - **PapersWithCode:** [Papers using the RIMES dataset](https://paperswithcode.com/dataset/rimes) - **Point of Contact:** [TEKLIA](https://teklia.com) ## Dataset Summary The RIMES database (Recognition and Indexation of handwritten documents and faxes) was created to evaluate automatic recognition and indexing systems for handwritten letters. ### Languages All the documents in the dataset are written in French. ## Dataset Structure ### Data Instances ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2560x128 at 0x1A800E8E190>, 'text': "Comme indiqué dans les conditions particulières de mon contrat d'assurance" } ``` ### Data Fields - `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0]. - `text`: the label transcription of the image.
Teklia/rimes-2011-lines
[ "task_categories:image-to-text", "language:fr", "license:mit", "region:us" ]
2024-01-09T10:45:47+00:00
{"language": ["fr"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "PyLaia RIMES", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 10188}, {"name": "validation", "num_examples": 1138}, {"name": "test", "num_examples": 778}], "dataset_size": 12104}}
2024-01-26T14:40:35+00:00
[]
[ "fr" ]
TAGS #task_categories-image-to-text #language-French #license-mit #region-us
# PyLaia RIMES Dataset ## Table of Contents - PyLaia RIMES Dataset - Table of Contents - Dataset Description - Languages - Dataset Structure - Data Instances - Data Fields ## Dataset Description - Homepage: ARTEMIS - PapersWithCode: Papers using the RIMES dataset - Point of Contact: TEKLIA ## Dataset Summary The RIMES database (Recognition and Indexation of handwritten documents and faxes) was created to evaluate automatic recognition and indexing systems for handwritten letters. ### Languages All the documents in the dataset are written in French. ## Dataset Structure ### Data Instances ### Data Fields - 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0]. - 'text': the label transcription of the image.
[ "# PyLaia RIMES Dataset", "## Table of Contents\n- PyLaia RIMES Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields", "## Dataset Description\n\n- Homepage: ARTEMIS\n- PapersWithCode: Papers using the RIMES dataset\n- Point of Contact: TEKLIA", "## Dataset Summary \n\nThe RIMES database (Recognition and Indexation of handwritten documents and faxes) was created to evaluate automatic recognition and indexing systems for handwritten letters.", "### Languages\n\nAll the documents in the dataset are written in French.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image." ]
[ "TAGS\n#task_categories-image-to-text #language-French #license-mit #region-us \n", "# PyLaia RIMES Dataset", "## Table of Contents\n- PyLaia RIMES Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields", "## Dataset Description\n\n- Homepage: ARTEMIS\n- PapersWithCode: Papers using the RIMES dataset\n- Point of Contact: TEKLIA", "## Dataset Summary \n\nThe RIMES database (Recognition and Indexation of handwritten documents and faxes) was created to evaluate automatic recognition and indexing systems for handwritten letters.", "### Languages\n\nAll the documents in the dataset are written in French.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image." ]
[ 29, 8, 40, 31, 42, 16, 6, 6, 133 ]
[ "passage: TAGS\n#task_categories-image-to-text #language-French #license-mit #region-us \n# PyLaia RIMES Dataset## Table of Contents\n- PyLaia RIMES Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields## Dataset Description\n\n- Homepage: ARTEMIS\n- PapersWithCode: Papers using the RIMES dataset\n- Point of Contact: TEKLIA## Dataset Summary \n\nThe RIMES database (Recognition and Indexation of handwritten documents and faxes) was created to evaluate automatic recognition and indexing systems for handwritten letters.### Languages\n\nAll the documents in the dataset are written in French.## Dataset Structure### Data Instances### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image." ]
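The data-fields note in the card above (prefer `dataset[0]["image"]` over `dataset["image"][0]`) comes down to lazy decoding: row access decodes a single image, while column access decodes every image before the index is applied. A toy model of that behavior — this is an illustration of the access pattern, not the `datasets` library internals:

```python
# Count how many "decodes" each access pattern triggers.
decode_count = {"n": 0}

def decode(path):
    decode_count["n"] += 1
    return f"decoded:{path}"

class ToyDataset:
    def __init__(self, paths):
        self.paths = paths

    def __getitem__(self, key):
        if isinstance(key, int):      # row access: decode one image
            return {"image": decode(self.paths[key])}
        if key == "image":            # column access: decode every image
            return [decode(p) for p in self.paths]
        raise KeyError(key)

ds = ToyDataset(["a.jpg", "b.jpg", "c.jpg"])

ds[0]["image"]
after_row = decode_count["n"]   # 1 decode so far

ds["image"][0]
after_col = decode_count["n"]   # 3 more decodes: 4 in total

print(after_row, after_col)  # 1 4
```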
e1caa77c96af7ae182fb19e831cff798a98785bf
# ChEBI-20-MM Dataset ## Overview The ChEBI-20-MM is an extensive and multi-modal benchmark developed from the ChEBI-20 dataset. It is designed to provide a comprehensive benchmark for evaluating various models' capabilities in the field of molecular science. This benchmark integrates multi-modal data, including InChI, IUPAC, SELFIES, and images, making it a versatile tool for a wide range of molecular tasks. ## Dataset Description ChEBI-20-MM is an expansion of the original ChEBI-20 dataset, with a focus on incorporating diverse modalities of molecular data. This benchmark is tailored to assess models in several key areas: - **Molecule Generation**: Evaluating the ability of models to generate accurate molecular structures. - **Image Recognition**: Testing models on their proficiency in converting molecular images into other representational formats. - **IUPAC Recognition**: Evaluating the ability of models to generate IUPAC names from other representational formats. - **Molecular Captioning**: Assessing the capability of models to generate descriptive captions for molecular structures. - **Retrieval Tasks**: Measuring the effectiveness of models in retrieving molecular information accurately and efficiently. ## Utility and Significance By expanding the data modality variety, this benchmark enables a more comprehensive evaluation of models' performance in multi-modal data handling. ## How to Use Model reviews and evaluations related to this dataset can be directly accessed and used via the LLM4Mol link: [LLM4Mol](https://github.com/AI-HPC-Research-Team/LLM4Mol). ## Data Visualization We employ visualization techniques to analyze the **suitability** of data sources for language models and **chemical space coverage**. The figure below illustrates our use of different visualization methods to analyze text length distributions and token counts generated by each model's tokenizer across various text data types. 
This approach evaluates the adaptability of language models to the textual characteristics of our dataset. ![Data Visualization](data_visualization.png) We also focus on the top 10 scaffolds within the dataset, counting the number of molecules for each scaffold. Here, semi-transparent bars represent the total count, while solid bars indicate the quantity in the training set. On the other hand, for the analysis of **chemical space coverage**, we choose molecular weight (MW), LogP, the number of aromatic rings, and the Topological Polar Surface Area (TPSA) as descriptors. We examine the distribution and correlation of these descriptors within the dataset, providing insights into the chemical diversity and complexity present in our data. ## Acknowledgments The development of the ChEBI-20-MM dataset was inspired by the ChEBI-20 in molecule generation and captioning initiated by MolT5. Additional data information supplements are derived from PubChem.
liupf/ChEBI-20-MM
[ "task_categories:text-generation", "task_categories:translation", "license:mit", "chemistry", "biology", "region:us" ]
2024-01-09T10:49:20+00:00
{"license": "mit", "task_categories": ["text-generation", "translation"], "tags": ["chemistry", "biology"]}
2024-01-10T01:59:43+00:00
[]
[]
TAGS #task_categories-text-generation #task_categories-translation #license-mit #chemistry #biology #region-us
# ChEBI-20-MM Dataset ## Overview The ChEBI-20-MM is an extensive and multi-modal benchmark developed from the ChEBI-20 dataset. It is designed to provide a comprehensive benchmark for evaluating various models' capabilities in the field of molecular science. This benchmark integrates multi-modal data, including InChI, IUPAC, SELFIES, and images, making it a versatile tool for a wide range of molecular tasks. ## Dataset Description ChEBI-20-MM is an expansion of the original ChEBI-20 dataset, with a focus on incorporating diverse modalities of molecular data. This benchmark is tailored to assess models in several key areas: - Molecule Generation: Evaluating the ability of models to generate accurate molecular structures. - Image Recognition: Testing models on their proficiency in converting molecular images into other representational formats. - IUPAC Recognition: Evaluating the ability of models to generate IUPAC names from other representational formats. - Molecular Captioning: Assessing the capability of models to generate descriptive captions for molecular structures. - Retrieval Tasks: Measuring the effectiveness of models in retrieving molecular information accurately and efficiently. ## Utility and Significance By expanding the data modality variety, this benchmark enables a more comprehensive evaluation of models' performance in multi-modal data handling. ## How to Use Model reviews and evaluations related to this dataset can be directly accessed and used via the LLM4Mol link: LLM4Mol. ## Data Visualization We employ visualization techniques to analyze the suitability of data sources for language models and chemical space coverage. The figure below illustrates our use of different visualization methods to analyze text length distributions and token counts generated by each model's tokenizer across various text data types. This approach evaluates the adaptability of language models to the textual characteristics of our dataset. 
!Data Visualization We also focus on the top 10 scaffolds within the dataset, counting the number of molecules for each scaffold. Here, semi-transparent bars represent the total count, while solid bars indicate the quantity in the training set. On the other hand, for the analysis of **chemical space coverage**, we choose molecular weight (MW), LogP, the number of aromatic rings, and the Topological Polar Surface Area (TPSA) as descriptors. We examine the distribution and correlation of these descriptors within the dataset, providing insights into the chemical diversity and complexity present in our data. ## Acknowledgments The development of the ChEBI-20-MM dataset was inspired by the ChEBI-20 in molecule generation and captioning initiated by MolT5. Additional data information supplements are derived from PubChem.
[ "# ChEBI-20-MM Dataset", "## Overview\n\nThe ChEBI-20-MM is an extensive and multi-modal benchmark developed from the ChEBI-20 dataset. It is designed to provide a comprehensive benchmark for evaluating various models' capabilities in the field of molecular science. This benchmark integrates multi-modal data, including InChI, IUPAC, SELFIES, and images, making it a versatile tool for a wide range of molecular tasks.", "## Dataset Description\n\nChEBI-20-MM is an expansion of the original ChEBI-20 dataset, with a focus on incorporating diverse modalities of molecular data. This benchmark is tailored to assess models in several key areas:\n\n- Molecule Generation: Evaluating the ability of models to generate accurate molecular structures.\n- Image Recognition: Testing models on their proficiency in converting molecular images into other representational formats.\n- IUPAC Recognition: Evaluating the ability of models to generate IUPAC names from other representational formats.\n- Molecular Captioning: Assessing the capability of models to generate descriptive captions for molecular structures.\n- Retrieval Tasks: Measuring the effectiveness of models in retrieving molecular information accurately and efficiently.", "## Utility and Significance\n\nBy expanding the data modality variety, this benchmark enables a more comprehensive evaluation of models' performance in multi-modal data handling.", "## How to Use\n\nModel reviews and evaluations related to this dataset can be directly accessed and used via the LLM4Mol link: LLM4Mol.", "## Data Visualization\n\nWe employ visualization techniques to analyze the suitability of data sources for language models and chemical space coverage. The figure below illustrates our use of different visualization methods to analyze text length distributions and token counts generated by each model's tokenizer across various text data types. 
This approach evaluates the adaptability of language models to the textual characteristics of our dataset.\n\n!Data Visualization\n\nWe also focus on the top 10 scaffolds within the dataset, counting the number of molecules for each scaffold. Here, semi-transparent bars represent the total count, while solid bars indicate the quantity in the training set. On the other hand, for the analysis of \\textbf{chemical space coverage}, we choose molecular weight (MW), LogP, the number of aromatic rings, and the Topological Polar Surface Area (TPSA) as descriptors. We examine the distribution and correlation of these descriptors within the dataset, providing insights into the chemical diversity and complexity present in our data.", "## Acknowledgments\n\nThe development of the ChEBI-20-MM dataset was inspired by the ChEBI-20 in molecule generation and captioning initiated by MolT5. Additional data information supplements are derived from PubChem." ]
[ "TAGS\n#task_categories-text-generation #task_categories-translation #license-mit #chemistry #biology #region-us \n", "# ChEBI-20-MM Dataset", "## Overview\n\nThe ChEBI-20-MM is an extensive and multi-modal benchmark developed from the ChEBI-20 dataset. It is designed to provide a comprehensive benchmark for evaluating various models' capabilities in the field of molecular science. This benchmark integrates multi-modal data, including InChI, IUPAC, SELFIES, and images, making it a versatile tool for a wide range of molecular tasks.", "## Dataset Description\n\nChEBI-20-MM is an expansion of the original ChEBI-20 dataset, with a focus on incorporating diverse modalities of molecular data. This benchmark is tailored to assess models in several key areas:\n\n- Molecule Generation: Evaluating the ability of models to generate accurate molecular structures.\n- Image Recognition: Testing models on their proficiency in converting molecular images into other representational formats.\n- IUPAC Recognition: Evaluating the ability of models to generate IUPAC names from other representational formats.\n- Molecular Captioning: Assessing the capability of models to generate descriptive captions for molecular structures.\n- Retrieval Tasks: Measuring the effectiveness of models in retrieving molecular information accurately and efficiently.", "## Utility and Significance\n\nBy expanding the data modality variety, this benchmark enables a more comprehensive evaluation of models' performance in multi-modal data handling.", "## How to Use\n\nModel reviews and evaluations related to this dataset can be directly accessed and used via the LLM4Mol link: LLM4Mol.", "## Data Visualization\n\nWe employ visualization techniques to analyze the suitability of data sources for language models and chemical space coverage. 
The figure below illustrates our use of different visualization methods to analyze text length distributions and token counts generated by each model's tokenizer across various text data types. This approach evaluates the adaptability of language models to the textual characteristics of our dataset.\n\n!Data Visualization\n\nWe also focus on the top 10 scaffolds within the dataset, counting the number of molecules for each scaffold. Here, semi-transparent bars represent the total count, while solid bars indicate the quantity in the training set. For the analysis of chemical space coverage, we choose molecular weight (MW), LogP, the number of aromatic rings, and the Topological Polar Surface Area (TPSA) as descriptors. We examine the distribution and correlation of these descriptors within the dataset, providing insights into the chemical diversity and complexity present in our data.", "## Acknowledgments\n\nThe development of the ChEBI-20-MM dataset was inspired by the ChEBI-20 dataset for molecule generation and captioning introduced by MolT5. Additional supplementary information is derived from PubChem." ]
[ 38, 9, 95, 177, 36, 36, 236, 55 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-translation #license-mit #chemistry #biology #region-us \n# ChEBI-20-MM Dataset## Overview\n\nThe ChEBI-20-MM is an extensive and multi-modal benchmark developed from the ChEBI-20 dataset. It is designed to provide a comprehensive benchmark for evaluating various models' capabilities in the field of molecular science. This benchmark integrates multi-modal data, including InChI, IUPAC, SELFIES, and images, making it a versatile tool for a wide range of molecular tasks.## Dataset Description\n\nChEBI-20-MM is an expansion of the original ChEBI-20 dataset, with a focus on incorporating diverse modalities of molecular data. This benchmark is tailored to assess models in several key areas:\n\n- Molecule Generation: Evaluating the ability of models to generate accurate molecular structures.\n- Image Recognition: Testing models on their proficiency in converting molecular images into other representational formats.\n- IUPAC Recognition: Evaluating the ability of models to generate IUPAC names from other representational formats.\n- Molecular Captioning: Assessing the capability of models to generate descriptive captions for molecular structures.\n- Retrieval Tasks: Measuring the effectiveness of models in retrieving molecular information accurately and efficiently.## Utility and Significance\n\nBy expanding the data modality variety, this benchmark enables a more comprehensive evaluation of models' performance in multi-modal data handling.## How to Use\n\nModel reviews and evaluations related to this dataset can be directly accessed and used via the LLM4Mol link: LLM4Mol." ]
a81bbc4c55660d277237eef26e00e5cbdad40cdd
# Dataset Card for "SciEntsBank"

SciEntsBank is one of the two distinct subsets within the Student Response Analysis (SRA) corpus, the other subset being the Beetle dataset. Derived from student answers gathered by Nielsen et al. [1], this dataset comprises nearly 11K responses to 197 assessment questions spanning 15 diverse science domains. The dataset features three labeling schemes: (a) 5-way, (b) 3-way, and (c) 2-way. The dataset includes a training set and three distinct test sets: (a) Unseen Answers (`test_ua`), (b) Unseen Questions (`test_uq`), and (c) Unseen Domains (`test_ud`).

- **Authors:** Myroslava Dzikovska, Rodney Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, Hoa Trang Dang
- **Paper:** [SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge](https://aclanthology.org/S13-2045)

# Loading Dataset

```python
from datasets import load_dataset

ds = load_dataset('nkazi/scientsbank')
```

# Labeling Schemes

The authors released the dataset with annotations using five labels (i.e., 5-way labeling scheme) for Automated Short-Answer Grading (ASAG). Additionally, the authors have introduced two alternative labeling schemes, namely the 3-way and 2-way schemes, both derived from the 5-way labeling scheme designed for Recognizing Textual Entailment (RTE). In the 3-way labeling scheme, the categories "partially correct but incomplete", "irrelevant", and "non-domain" are consolidated into a unified category labeled as "incorrect". On the other hand, the 2-way labeling scheme simplifies the classification into a binary system where all labels except "correct" are merged under the "incorrect" category.

This repository provides access to the dataset annotated with the 5-way labeling scheme. If needed, the code provided below can be utilized to convert labels from 5-way to either 3-way or 2-way labels.
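Conceptually, both conversions are plain label-id remappings. A stdlib sketch of the collapse (independent of the `datasets` API, for illustration only):

```python
# 5-way ids: 0=correct, 1=contradictory, 2=partially_correct_incomplete,
#            3=irrelevant, 4=non_domain
FIVE_TO_THREE = {0: 0, 1: 1, 2: 2, 3: 2, 4: 2}  # 3-way: 0=correct, 1=contradictory, 2=incorrect
FIVE_TO_TWO = {0: 0, 1: 1, 2: 1, 3: 1, 4: 1}    # 2-way: 0=correct, 1=incorrect

def remap(labels, mapping):
    """Apply an id-to-id mapping to a list of label ids."""
    return [mapping[label] for label in labels]

print(remap([0, 4, 2, 1, 3], FIVE_TO_THREE))  # [0, 2, 2, 1, 2]
print(remap([0, 4, 2, 1, 3], FIVE_TO_TWO))    # [0, 1, 1, 1, 1]
```

The `datasets` snippets below perform exactly this remapping on the `label` column, and additionally rewrite the `ClassLabel` feature so that the new ids carry the right names.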
## 5-way to 3-way

```python
from datasets import ClassLabel

ds = ds.align_labels_with_mapping({'correct': 0, 'contradictory': 1, 'partially_correct_incomplete': 2, 'irrelevant': 2, 'non_domain': 2}, 'label')
ds = ds.cast_column('label', ClassLabel(names=['correct', 'contradictory', 'incorrect']))
```

Using `align_labels_with_mapping()`, we map "partially correct but incomplete", "irrelevant", and "non-domain" to the same id. Subsequently, we use `cast_column()` to redefine the class labels (i.e., the label feature) so that id 2 corresponds to the "incorrect" label.

## 5-way to 2-way

```python
ds = ds.align_labels_with_mapping({'correct': 0, 'contradictory': 1, 'partially_correct_incomplete': 1, 'irrelevant': 1, 'non_domain': 1}, 'label')
ds = ds.cast_column('label', ClassLabel(names=['correct', 'incorrect']))
```

In the above code, the label "correct" is mapped to 0 to maintain consistency with both the 5-way and 3-way labeling schemes. If the preference is to represent "correct" with id 1 and "incorrect" with id 0, either adjust the label map accordingly or run the following to switch the ids:

```python
ds = ds.align_labels_with_mapping({'incorrect': 0, 'correct': 1}, 'label')
```

## Saving dataset

Use the following code to store the dataset with the 3-way (or 2-way) labeling scheme locally to eliminate the need to convert labels each time the dataset is loaded:

```python
ds.save_to_disk('SciEntsBank_3way')
```

Here, `SciEntsBank_3way` specifies the path/directory where the dataset will be stored. Use the following code to load the dataset from the same local directory/path (note that datasets saved with `save_to_disk()` must be loaded with `load_from_disk()`, not `load_dataset()`):

```python
from datasets import load_from_disk

ds = load_from_disk('SciEntsBank_3way')
```

## Printing Label Distribution

It is advisable to verify the label distribution post-conversion from the 5-way labeling scheme to either the 3-way or 2-way labeling scheme.
Use the following code to print the label distribution:

```python
def print_label_dist(dataset):
    for split_name in dataset:
        print(split_name, ':')
        for label in dataset[split_name].features['label'].names:
            print(' ', label, ':', dataset[split_name]['label'].count(dataset[split_name].features['label'].str2int(label)))

print_label_dist(ds)
```

# Label Distribution

<style>
.label-dist th:not(:first-child), .label-dist td:not(:first-child) {
  width: 15%;
}
</style>

<div class="label-dist">

## 5-way

Label | Train | Test UA | Test UQ | Test UD
--- | --: | --: | --: | --:
Correct | 2,008 | 233 | 301 | 1,917
Contradictory | 499 | 58 | 64 | 417
Partially correct but incomplete | 1,324 | 113 | 175 | 986
Irrelevant | 1,115 | 133 | 193 | 1,222
Non-domain | 23 | 3 | - | 20
Total | 4,969 | 540 | 733 | 4,562

## 3-way

Label | Train | Test UA | Test UQ | Test UD
--- | --: | --: | --: | --:
Correct | 2,008 | 233 | 301 | 1,917
Contradictory | 499 | 58 | 64 | 417
Incorrect | 2,462 | 249 | 368 | 2,228
Total | 4,969 | 540 | 733 | 4,562

## 2-way

Label | Train | Test UA | Test UQ | Test UD
--- | --: | --: | --: | --:
Correct | 2,008 | 233 | 301 | 1,917
Incorrect | 2,961 | 307 | 432 | 2,645
Total | 4,969 | 540 | 733 | 4,562

</div>

# Citation

```tex
@inproceedings{dzikovska2013semeval,
  title = {{S}em{E}val-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge},
  author = {Dzikovska, Myroslava and Nielsen, Rodney and Brew, Chris and Leacock, Claudia and Giampiccolo, Danilo and Bentivogli, Luisa and Clark, Peter and Dagan, Ido and Dang, Hoa Trang},
  year = 2013,
  month = jun,
  booktitle = {Second Joint Conference on Lexical and Computational Semantics ({SEM}), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation ({S}em{E}val 2013)},
  editor = {Manandhar, Suresh and Yuret, Deniz},
  publisher = {Association for Computational Linguistics},
  address = {Atlanta, Georgia, USA},
  pages = {263--274},
  url = {https://aclanthology.org/S13-2045},
}
```

# References

1. Rodney D. Nielsen, Wayne Ward, James H. Martin, and Martha Palmer. 2008. Annotating students' understanding of science concepts. In Proceedings of the Sixth International Language Resources and Evaluation Conference, Marrakech, Morocco.
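As an aside, the label-count check performed by `print_label_dist` can also be written with `collections.Counter` over the raw label ids. A stdlib sketch on toy data (independent of the `datasets` API):

```python
from collections import Counter

def label_distribution(label_ids, names):
    """Map each label name to its frequency in a list of label ids."""
    counts = Counter(label_ids)
    return {name: counts.get(i, 0) for i, name in enumerate(names)}

names_3way = ['correct', 'contradictory', 'incorrect']
print(label_distribution([0, 2, 2, 1, 0, 2], names_3way))
# {'correct': 2, 'contradictory': 1, 'incorrect': 3}
```

This makes a single pass over the column instead of one `count()` scan per label.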
nkazi/SciEntsBank
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "region:us" ]
2024-01-09T10:52:26+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "SciEntsBank", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "student_answer", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "correct", "1": "contradictory", "2": "partially_correct_incomplete", "3": "irrelevant", "4": "non_domain"}}}}], "splits": [{"name": "train", "num_bytes": 2179970, "num_examples": 4969}, {"name": "test_ua", "num_bytes": 236393, "num_examples": 540}, {"name": "test_uq", "num_bytes": 304225, "num_examples": 733}, {"name": "test_ud", "num_bytes": 1745607, "num_examples": 4562}], "download_size": 498408, "dataset_size": 4466195}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test_ua", "path": "data/test_ua-*"}, {"split": "test_uq", "path": "data/test_uq-*"}, {"split": "test_ud", "path": "data/test_ud-*"}]}]}
2024-01-10T03:56:32+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us
Dataset Card for "SciEntsBank" ============================== SciEntsBank is one of the two distinct subsets within the Student Response Analysis (SRA) corpus, the other subset being the Beetle dataset. Derived from student answers gathered by Nielsen et al. [1], this dataset comprises nearly 11K responses to 197 assessment questions spanning 15 diverse science domains. The dataset features three labeling schemes: (a) 5-way, (b) 3-way, and (c) 2-way. The dataset includes a training set and three distinct test sets: (a) Unseen Answers ('test\_ua'), (b) Unseen Questions ('test\_uq'), and (c) Unseen Domains ('test\_ud'). * Authors: Myroslava Dzikovska, Rodney Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, Hoa Trang Dang * Paper: SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge Loading Dataset =============== Labeling Schemes ================ The authors released the dataset with annotations using five labels (i.e., 5-way labeling scheme) for Automated Short-Answer Grading (ASAG). Additionally, the authors have introduced two alternative labeling schemes, namely the 3-way and 2-way schemes, both derived from the 5-way labeling scheme designed for Recognizing Textual Entailment (RTE). In the 3-way labeling scheme, the categories "partially correct but incomplete", "irrelevant", and "non-domain" are consolidated into a unified category labeled as "incorrect". On the other hand, the 2-way labeling scheme simplifies the classification into a binary system where all labels except "correct" are merged under the "incorrect" category. This repository provides access to the dataset annotated with the 5-way labeling scheme. If needed, the code provided below can be utilized to convert labels from 5-way to either 3-way or 2-way labels. 
5-way to 3-way -------------- Using 'align\_labels\_with\_mapping()', we are mapping "partially correct but incomplete", "irrelevant", and "non-domain" to the same id. Subsequently, we are using 'cast\_column()' to redefine the class labels (i.e., the label feature) where the id 2 corresponds to the "incorrect" label. 5-way to 2-way -------------- In the above code, the label "correct" is mapped to 0 to maintain consistency with both the 5-way and 3-way labeling schemes. If the preference is to represent "correct" with id 1 and "incorrect" with id 0, either adjust the label map accordingly or run the following to switch the ids: Saving dataset -------------- Use the following code to store the dataset with the 3-way (or 2-way) labeling scheme locally to eliminate the need to convert labels each time the dataset is loaded: Here, 'SciEntsBank\_3way' depicts the path/directory where the dataset will be stored. Use the following code to load the dataset from the same local directory/path: Printing Label Distribution --------------------------- It is advisable to verify the label distribution post-conversion from the 5-way labeling scheme to either the 3-way or 2-way labeling scheme. Use the following code to print the label distribution: Label Distribution ================== .label-dist th:not(:first-child), .label-dist td:not(:first-child) { width: 15%; } 5-way ----- 3-way ----- 2-way ----- References ========== 1. Rodney D. Nielsen, Wayne Ward, James H. Martin, and Martha Palmer. 2008. Annotating students' understanding of science concepts. In Proceedings of the Sixth International Language Resources and Evaluation Conference, Marrakech, Morocco.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n" ]
[ 42 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n" ]
5f4b8928a27d5ff6cb5f06ffbe075dfc07402fcd
# Dataset Card for "mini-imdb"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ignacioct/mini-imdb
[ "region:us" ]
2024-01-09T11:12:29+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "hola"}}}}], "splits": [{"name": "train", "num_bytes": 3704, "num_examples": 3}], "download_size": 14002, "dataset_size": 3704}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T11:12:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mini-imdb" More Information needed
[ "# Dataset Card for \"mini-imdb\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mini-imdb\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mini-imdb\"\n\nMore Information needed" ]
252227592e1dd5a0f92ef99040851de8eee1c3ea
# Dataset Card for Rejection Sampling Phi-2 with OpenAssistant RM

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Point of Contact:** [Alizée Pace](mailto:[email protected])

### Dataset Summary

The "Rejection Sampling Phi-2 with OpenAssistant RM" dataset consists of 10 pairs of prompts and responses, which were generated using rejection sampling over 10 Phi-2 generations with the OpenAssistant Reward Model.

### Supported Tasks and Leaderboards

The dataset and its creation rationale could be used to support models for question-answering, text-generation, or conversational agents.

### Languages

The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
## Dataset Structure ### Data Instances For each instance, there is a list of strings for the model prompt, a string for the best generation, and a float for the reward model score. ``` {'prompt': ['Write a story about a lion that lives in the city and falls in love with a pigeon.'], 'generation': 'Instruction: Write a story about a lion that lives in the city and falls in love with a pigeon.\nOutput: Leo was a lion who lived in the city. He hated the noise and the pollution. He longed for the freedom and the beauty of the savanna. He spent his days wandering around the streets, looking for prey or a place to nap.\n\nOne day, he spotted a pigeon sitting on a ledge. He was hungry and decided to try to catch it. He roared loudly, hoping to scare the bird away. But the pigeon was not afraid. It flew up and landed on his nose.\n\nLeo was surprised and annoyed. He puffed up his mane and tried to shake the pigeon off. But the pigeon was persistent. It pecked at his nose and made jokes.\n\nLeo was irritated and bored. He decided to play along. He rubbed his nose and made funny faces. The pigeon laughed and joined in. They became friends and had fun together.\n\nLeo realized that the pigeon was not so bad. It was actually quite smart and funny. It taught him new words and tricks. It showed him new sights and sounds. It made him forget about his problems and enjoy the moment.\n\nLeo fell in love with the pigeon. He wanted to be with it forever. He asked it to leave the city and follow him to the savanna. The pigeon agreed. It said it would always be his friend and his mate.\n\nLeo and the pigeon left the city and embarked on a new adventure. They found happiness and peace in the wild. They lived happily ever after.\n', 'rm_score': 3.1899418831} ``` ### Data Fields - `prompt`: a list of strings containing the prompts used to query Phi-2, with each string representing a turn. 
- `generation`: a string containing the best-scored Phi-2 generation out of 10, including the formatted prompt.
- `rm_score`: a float corresponding to the OpenAssistant reward score assigned to the generation.

### Data Splits

The dataset is not divided into splits.

## Dataset Creation

### Curation Rationale

[N/A]

### Source Data

#### Initial Data Collection and Normalization

Prompts were written or selected by a human to capture a range of instruction-following abilities (creativity, factuality, reasoning, etc.).

Generations were produced for each prompt using the [Phi-2](https://huggingface.co/microsoft/phi-2) language model. Following the Phi-2 QA and chat prompting instructions, prompts were formatted as follows:

* single-turn queries: `"Instruction: {query}\nOutput: "`
* multi-turn queries: `"Alice: {query_turn_1}\nBob: {query_turn_2}...\nAlice: {query_turn_n}\nBob: "`

Only the top-scored of the 10 generations per prompt was retained for this dataset; see [Annotations](#annotations) for details on the scoring procedure.

#### Who are the source language producers?

Two out of the 10 prompts were taken from the [GSM8K](https://huggingface.co/datasets/gsm8k) dataset to probe reasoning ability. The rest were generated by a human writer.

Generations were produced for each prompt using the [Phi-2](https://huggingface.co/microsoft/phi-2) language model.

### Annotations

#### Annotation process

All generations are scored using an external reward model.

#### Who are the annotators?

Reward scores were produced for each generation using the [Open Assistant reward model](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2).

### Personal and Sensitive Information

The dataset contains no personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

Some prompts are deliberately designed to probe models' ability to generate or reward safe and harmless text. Prompts and generations may therefore contain offensive or toxic language.
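The prompt-formatting convention and best-of-N selection described under Source Data and Annotations can be sketched in a few lines. The formatting strings follow the card; `generate` and `reward` are stand-in stubs for Phi-2 sampling and the reward model (both are illustrative assumptions, not the original pipeline):

```python
def format_prompt(turns):
    """Format a prompt per the card's convention: single-turn uses
    Instruction/Output; multi-turn alternates Alice/Bob speakers, with
    the final speaker tag left open for the model to complete."""
    if len(turns) == 1:
        return f"Instruction: {turns[0]}\nOutput: "
    names = ["Alice", "Bob"]
    lines = [f"{names[i % 2]}: {t}" for i, t in enumerate(turns)]
    lines.append(f"{names[len(turns) % 2]}: ")
    return "\n".join(lines)

def best_of_n(turns, generate, reward, n=10):
    """Rejection sampling: draw n candidates, keep the highest-reward one."""
    prompt = format_prompt(turns)
    candidates = [generate(prompt) for _ in range(n)]
    scores = [reward(prompt, c) for c in candidates]
    best = max(range(n), key=scores.__getitem__)
    return candidates[best], scores[best]

# Deterministic stubs standing in for the language model and reward model:
samples = iter(["draft A", "a much better draft", "draft C"])
best, score = best_of_n(["Write a story about a lion."],
                        generate=lambda p: next(samples),
                        reward=lambda p, c: len(c), n=3)
print(best, score)  # a much better draft 19
```

In the real pipeline the stubs would be replaced by sampling from Phi-2 and scoring with the OpenAssistant reward model; only the argmax candidate per prompt is kept, matching the `generation`/`rm_score` fields above.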
### Discussion of Biases

[N/A]

### Other Known Limitations

The limited size of the dataset does not allow for a thorough exploration of all aspects of model instruction-following ability.

## Additional Information

### Dataset Curators

The dataset was collected by Alizée Pace.

### Licensing Information

The dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

[N/A]

### Contributions

Thanks to [@alizeepace](https://github.com/alizeepace) for adding this dataset.
alizeepace/rejection_sampling_phi_2_OA_rm
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:conversational", "annotations_creators:machine-generated", "language_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:GSM8K", "language:en", "license:apache-2.0", "region:us" ]
2024-01-09T11:49:03+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated", "machine-generated"], "language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["GSM8K"], "task_categories": ["question-answering", "text-generation", "conversational"], "pretty_name": "Rejection Sampling Phi-2 with OpenAssistant RM"}
2024-01-10T06:55:58+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text-generation #task_categories-conversational #annotations_creators-machine-generated #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-GSM8K #language-English #license-apache-2.0 #region-us
# Dataset Card for Rejection Sampling Phi-2 with OpenAssistant RM ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Point of Contact: Alizée Pace ### Dataset Summary The "Rejection Sampling Phi-2 with OpenAssistant RM" dataset consists of 10 pairs of prompts and responses, which were generated using rejection sampling over 10 Phi-2 generations with the OpenAssistant Reward Model. ### Supported Tasks and Leaderboards The dataset and its creation rationale could be used to support models for question-answering, text-generation, or conversational agents. ### Languages The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data. ## Dataset Structure ### Data Instances For each instance, there is a list of strings for the model prompt, a string for the best generation, and a float for the reward model score. ### Data Fields - 'prompt': a list of strings containing the prompts used to query Phi-2, with each string representing a turn. - 'generation': a string containing the best-scored Phi-2 generation out of 10, including the formatted prompt. - 'rm_score': a float corresponding to the OpenAssistant reward score assigned to the generation. ### Data Splits The dataset is not divided into splits.
## Dataset Creation ### Curation Rationale [N/A] ### Source Data #### Initial Data Collection and Normalization Prompts were written or selected by a human to capture a range of instruction-following abilities (creativity, factuality, reasoning, etc.). Generations were produced for each prompt using the Phi-2 language model. Following the Phi-2 QA and chat prompting instructions, prompts were formatted as follows: * single-turn queries: '"Instruction: {query}\nOutput: "' * multi-turn queries: '"Alice: {query_turn_1}\nBob: {query_turn_2}...\nAlice: {query_turn_n}\nBob: "' Only the top-scored of the 10 generations per prompt was retained for this dataset; see Annotations for details on the scoring procedure. #### Who are the source language producers? Two out of the 10 prompts were taken from the GSM8K dataset to probe reasoning ability. The rest were generated by a human writer. Generations were produced for each prompt using the Phi-2 language model. ### Annotations #### Annotation process All generations are scored using an external reward model. #### Who are the annotators? Reward scores were produced for each generation using the Open Assistant reward model. ### Personal and Sensitive Information The dataset contains no personal or sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset Some prompts are deliberately designed to probe models' ability to generate or reward safe and harmless text. Prompts and generations may therefore contain offensive or toxic language. ### Discussion of Biases [N/A] ### Other Known Limitations The limited size of the dataset does not allow for a thorough exploration of all aspects of model instruction-following ability. ## Additional Information ### Dataset Curators The dataset was collected by Alizée Pace. ### Licensing Information The dataset is released under the Apache-2.0 License. [N/A] ### Contributions Thanks to @alizeepace for adding this dataset.
[ "# Dataset Card for Rejection Sampling Phi-2 with OpenAssistant RM", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Point of Contact: Alizée Pace", "### Dataset Summary\n\nThe \"Rejection Sampling Phi-2 with OpenAssistant RM\" dataset consists of 10 pairs of prompts and responses, which were generated using rejection sampling over 10 Phi-2 generation using the OpenAssistant Reward Model.", "### Supported Tasks and Leaderboards\n\nThe dataset and its creation rationale could be used to support models for question-answering, text-generation, or conversational agents.", "### Languages\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. 
It is unknown if other varieties of English are represented in the data.", "## Dataset Structure", "### Data Instances\n\nFor each instance, there is a list of strings for the model prompt, a string for the best generation, and a float for the reward model score.", "### Data Fields\n\n- 'prompt': a list of strings containing the prompts used to query Phi-2, with each string representing a turn.\n- 'generation': a string containing the best-scored Phi-2 generation out of 10, including the formatted prompt.\n- 'rm_score': a float corresponding to the OpenAssistant reward score assigned to the generation.", "### Data Splits\n\nThe dataset is not divided into splits.", "## Dataset Creation", "### Curation Rationale\n\n[N/A]", "### Source Data", "#### Initial Data Collection and Normalization\n\nPrompts were written or selected by a human to capture a range of instruction-following abilities (creativity, factuality, reasoning, etc.).\n\nGenerations were produced for each prompt using the Phi-2 language model. Following the Phi-2 QA and chat prompting instructions, prompts were formatted with the following rationale:\n* single-turn queries: '\"Instruction: {query}\\nOutput: \"'\n* multi-turn queries: '\"Alice: {query_turn_1}\\nBob: {query_turn_2}...\\nAlice: {query_turn_n}\\nBob: \"'\n\nOnly the top-scored out of 10 generations were retained for this dataset; see Annotations for details on the scoring procedure.", "#### Who are the source language producers?\n\nTwo out of the 10 prompts were taken from the GSM8K dataset to probe reasoning ability. 
The rest were generated by a human writer.\n\nGenerations were produced for each prompt using the Phi-2 language model.", "### Annotations", "#### Annotation process\n\nAll generations are scored using an external reward model.", "#### Who are the annotators?\n\nReward scores were produced for each generation using the Open Assistant reward model.", "### Personal and Sensitive Information\n\nNo personal and sensitive information.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nSome prompts are deliberately designed to probe models' ability to generate or reward safe and harmless text. Prompts and generation may therefore contain offensive or toxic language.", "### Discussion of Biases\n\n[N/A]", "### Other Known Limitations\n\nThe limited size of the dataset does not allow to thoroughly explore all aspects of model instruction-following ability.", "## Additional Information", "### Dataset Curators\n\nThe dataset was collected by Alizée Pace.", "### Licensing Information\n\nThe dataset is released under the Apache-2.0 License. \n\n\n\n[N/A]", "### Contributions\n\nThanks to @alizeepace for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #annotations_creators-machine-generated #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-GSM8K #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for Rejection Sampling Phi-2 with OpenAssistant RM", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Point of Contact: Alizée Pace", "### Dataset Summary\n\nThe \"Rejection Sampling Phi-2 with OpenAssistant RM\" dataset consists of 10 pairs of prompts and responses, which were generated using rejection sampling over 10 Phi-2 generation using the OpenAssistant Reward Model.", "### Supported Tasks and Leaderboards\n\nThe dataset and its creation rationale could be used to support models for question-answering, text-generation, or conversational agents.", "### Languages\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. 
It is unknown if other varieties of English are represented in the data.", "## Dataset Structure", "### Data Instances\n\nFor each instance, there is a list of strings for the model prompt, a string for the best generation, and a float for the reward model score.", "### Data Fields\n\n- 'prompt': a list of strings containing the prompts used to query Phi-2, with each string representing a turn.\n- 'generation': a string containing the best-scored Phi-2 generation out of 10, including the formatted prompt.\n- 'rm_score': a float corresponding to the OpenAssistant reward score assigned to the generation.", "### Data Splits\n\nThe dataset is not divided into splits.", "## Dataset Creation", "### Curation Rationale\n\n[N/A]", "### Source Data", "#### Initial Data Collection and Normalization\n\nPrompts were written or selected by a human to capture a range of instruction-following abilities (creativity, factuality, reasoning, etc.).\n\nGenerations were produced for each prompt using the Phi-2 language model. Following the Phi-2 QA and chat prompting instructions, prompts were formatted with the following rationale:\n* single-turn queries: '\"Instruction: {query}\\nOutput: \"'\n* multi-turn queries: '\"Alice: {query_turn_1}\\nBob: {query_turn_2}...\\nAlice: {query_turn_n}\\nBob: \"'\n\nOnly the top-scored out of 10 generations were retained for this dataset; see Annotations for details on the scoring procedure.", "#### Who are the source language producers?\n\nTwo out of the 10 prompts were taken from the GSM8K dataset to probe reasoning ability. 
The rest were generated by a human writer.\n\nGenerations were produced for each prompt using the Phi-2 language model.", "### Annotations", "#### Annotation process\n\nAll generations are scored using an external reward model.", "#### Who are the annotators?\n\nReward scores were produced for each generation using the Open Assistant reward model.", "### Personal and Sensitive Information\n\nNo personal or sensitive information.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nSome prompts are deliberately designed to probe models' ability to generate or reward safe and harmless text. Prompts and generations may therefore contain offensive or toxic language.", "### Discussion of Biases\n\n[N/A]", "### Other Known Limitations\n\nThe limited size of the dataset does not allow a thorough exploration of all aspects of model instruction-following ability.", "## Additional Information", "### Dataset Curators\n\nThe dataset was collected by Alizée Pace.", "### Licensing Information\n\nThe dataset is released under the Apache-2.0 License. \n\n\n\n[N/A]", "### Contributions\n\nThanks to @alizeepace for adding this dataset." ]
[ 115, 19, 125, 14, 64, 40, 62, 6, 39, 93, 16, 5, 12, 4, 198, 58, 5, 17, 25, 14, 8, 46, 13, 32, 5, 19, 24, 18 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #annotations_creators-machine-generated #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-GSM8K #language-English #license-apache-2.0 #region-us \n# Dataset Card for Rejection Sampling Phi-2 with OpenAssistant RM## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Point of Contact: Alizée Pace### Dataset Summary\n\nThe \"Rejection Sampling Phi-2 with OpenAssistant RM\" dataset consists of 10 pairs of prompts and responses, which were generated using rejection sampling over 10 Phi-2 generation using the OpenAssistant Reward Model.### Supported Tasks and Leaderboards\n\nThe dataset and its creation rationale could be used to support models for question-answering, text-generation, or conversational agents.### Languages\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. 
It is unknown if other varieties of English are represented in the data.## Dataset Structure### Data Instances\n\nFor each instance, there is a list of strings for the model prompt, a string for the best generation, and a float for the reward model score.", "passage: ### Data Fields\n\n- 'prompt': a list of strings containing the prompts used to query Phi-2, with each string representing a turn.\n- 'generation': a string containing the best-scored Phi-2 generation out of 10, including the formatted prompt.\n- 'rm_score': a float corresponding to the OpenAssistant reward score assigned to the generation.### Data Splits\n\nThe dataset is not divided into splits.## Dataset Creation### Curation Rationale\n\n[N/A]### Source Data#### Initial Data Collection and Normalization\n\nPrompts were written or selected by a human to capture a range of instruction-following abilities (creativity, factuality, reasoning, etc.).\n\nGenerations were produced for each prompt using the Phi-2 language model. Following the Phi-2 QA and chat prompting instructions, prompts were formatted with the following rationale:\n* single-turn queries: '\"Instruction: {query}\\nOutput: \"'\n* multi-turn queries: '\"Alice: {query_turn_1}\\nBob: {query_turn_2}...\\nAlice: {query_turn_n}\\nBob: \"'\n\nOnly the top-scored out of 10 generations were retained for this dataset; see Annotations for details on the scoring procedure.#### Who are the source language producers?\n\nTwo out of the 10 prompts were taken from the GSM8K dataset to probe reasoning ability. 
The rest were generated by a human writer.\n\nGenerations were produced for each prompt using the Phi-2 language model.### Annotations#### Annotation process\n\nAll generations are scored using an external reward model.#### Who are the annotators?\n\nReward scores were produced for each generation using the Open Assistant reward model.### Personal and Sensitive Information\n\nNo personal or sensitive information.## Considerations for Using the Data### Social Impact of Dataset\n\nSome prompts are deliberately designed to probe models' ability to generate or reward safe and harmless text. Prompts and generations may therefore contain offensive or toxic language.### Discussion of Biases\n\n[N/A]### Other Known Limitations\n\nThe limited size of the dataset does not allow a thorough exploration of all aspects of model instruction-following ability.## Additional Information### Dataset Curators\n\nThe dataset was collected by Alizée Pace.### Licensing Information\n\nThe dataset is released under the Apache-2.0 License. \n\n\n\n[N/A]" ]
7fe4f22caec438cb2f157dcb80f9409d190cf5d1
Kannada translation of jondurbin/airoboros-3.2
Tensoic/airoboros-3.2_kn
[ "task_categories:text-generation", "language:kn", "license:cc-by-4.0", "region:us" ]
2024-01-09T12:33:17+00:00
{"language": ["kn"], "license": "cc-by-4.0", "task_categories": ["text-generation"]}
2024-01-09T15:34:28+00:00
[]
[ "kn" ]
TAGS #task_categories-text-generation #language-Kannada #license-cc-by-4.0 #region-us
Kannada translation of jondurbin/airoboros-3.2
[]
[ "TAGS\n#task_categories-text-generation #language-Kannada #license-cc-by-4.0 #region-us \n" ]
[ 31 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Kannada #license-cc-by-4.0 #region-us \n" ]
ffd1156a123de1a61932d57e7a6904b9f56ef369
# Dataset Card for SimpleQuestions-SPARQLtoText ## Table of Contents - [Dataset Card for SimpleQuestions-SPARQLtoText](#dataset-card-for-simplequestions-sparqltotext) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [JSON fields](#json-fields) - [Format of the SPARQL queries](#format-of-the-sparql-queries) - [Answerable/unanswerable](#answerableunanswerable) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Types of questions](#types-of-questions) - [Data splits](#data-splits) - [Additional information](#additional-information) - [Related datasets](#related-datasets) - [Licencing information](#licencing-information) - [Citation information](#citation-information) - [This version of the corpus (with normalized SPARQL queries)](#this-version-of-the-corpus-with-normalized-sparql-queries) - [Original version](#original-version) ## Dataset Description - **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/) - **Point of Contact:** Gwénolé Lecorvé ### Dataset Summary Special version of [SimpleQuestions](https://github.com/askplatypus/wikidata-simplequestions) with SPARQL queries formatted for the SPARQL-to-Text task. #### JSON fields The original version of SimpleQuestions is a raw text file listing triples and the natural language question. A JSON version has been generated and augmented with the following fields: * `rdf_subject`, `rdf_property`, `rdf_object`: triple in the Wikidata format (IDs) * `nl_subject`, `nl_property`, `nl_object`: triple with labels retrieved from Wikidata. Some entities do not have labels, they are labelled as `UNDEFINED_LABEL` * `sparql_query`: SPARQL query with Wikidata IDs * `verbalized_sparql_query`: SPARQL query with labels * `original_nl_question`: original natural language question from SimpleQuestions. This is in **lower case**. 
* `recased_nl_question`: Version of `original_nl_question` where the named entities have been automatically recased based on the labels of the entities. #### Format of the SPARQL queries * Randomizing the variables names * Delimiters are spaced #### Answerable/unanswerable Some questions in SimpleQuestions cannot be answered. Hence, it originally comes with 2 versions for the train/valid/test sets: one with all entries, another with the answerable questions only. ### Languages - English ## Dataset Structure ### Types of questions Comparison of question types compared to related datasets: | | | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) | |--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:| | **Number of triplets in query** | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | ✓ | ✓ | ✓ | ✓ | | | More | | | ✓ | ✓ | ✓ | | **Logical connector between triplets** | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ | | | Disjunction | | | | ✓ | ✓ | | | Exclusion | | | | ✓ | ✓ | | **Topology of the query graph** | Direct | ✓ | ✓ | ✓ | ✓ | ✓ | | | Sibling | | ✓ | ✓ | ✓ | ✓ | | | Chain | | ✓ | ✓ | ✓ | ✓ | | | Mixed | | | ✓ | | ✓ | | | Other | | ✓ | ✓ | ✓ | ✓ | | **Variable typing in the query** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | Target variable | | ✓ | ✓ | ✓ | ✓ | | | Internal variable | | ✓ | ✓ | ✓ | ✓ | | **Comparisons clauses** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | String | | | ✓ | | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Date | | | ✓ | | ✓ | | **Superlative clauses** | No | ✓ | ✓ | ✓ | ✓ | ✓ | | | Yes | | | | ✓ | | | **Answer type** | Entity (open) | ✓ | ✓ | ✓ | ✓ | ✓ | | | Entity (closed) | | | | ✓ | ✓ | | | Number 
| | | ✓ | ✓ | ✓ | | | Boolean | | ✓ | ✓ | ✓ | ✓ | | **Answer cardinality** | 0 (unanswerable) | | | ✓ | | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | More | | ✓ | ✓ | ✓ | ✓ | | **Number of target variables** | 0 (⇒ ASK verb) | | ✓ | ✓ | ✓ | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | | ✓ | | ✓ | | **Dialogue context** | Self-sufficient | ✓ | ✓ | ✓ | ✓ | ✓ | | | Coreference | | | | ✓ | ✓ | | | Ellipsis | | | | ✓ | ✓ | | **Meaning** | Meaningful | ✓ | ✓ | ✓ | ✓ | ✓ | | | Non-sense | | | | | ✓ | ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples only contain dialogues in the form of follow-up sparql queries. | | Train | Validation | Test | | --------------------- | ---------- | ---------- | ---------- | | Questions | 34,000 | 5,000 | 10,000 | | NL question per query | 1 | | Characters per query | 70 (± 10) | | Tokens per question | 7.4 (± 2.1) | ## Additional information ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: - Non conversational datasets - [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) (from https://github.com/askplatypus/wikidata-simplequestions) - [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) (from https://github.com/barshana-banerjee/ParaQA) - [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) (from http://lc-quad.sda.tech/) - Conversational datasets - [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) (from https://amritasaha1812.github.io/CSQA/) - [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) (derived from https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/release_v3.0) ### Licencing information * Content from original dataset: CC-BY 3.0 * New content: CC BY-SA 4.0 ### Citation information #### This version of the corpus (with normalized SPARQL queries) ```bibtex 
@inproceedings{lecorve2022sparql2text, title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications}, author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.}, journal={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)}, year={2022} } ``` #### Original version ```bibtex @article{bordes2015large, title={Large-scale simple question answering with memory networks}, author={Bordes, Antoine and Usunier, Nicolas and Chopra, Sumit and Weston, Jason}, journal={arXiv preprint arXiv:1506.02075}, year={2015} } ```
OrangeInnov/simplequestions-sparqltotext
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "qa", "knowledge-graph", "sparql", "region:us" ]
2024-01-09T12:36:50+00:00
{"language": ["en"], "task_categories": ["question-answering", "text-generation"], "dataset_info": {"features": [{"name": "original_nl_question", "dtype": "string"}, {"name": "recased_nl_question", "dtype": "string"}, {"name": "sparql_query", "dtype": "string"}, {"name": "verbalized_sparql_query", "dtype": "string"}, {"name": "nl_subject", "dtype": "string"}, {"name": "nl_property", "dtype": "string"}, {"name": "nl_object", "dtype": "string"}, {"name": "nl_answer", "dtype": "string"}, {"name": "rdf_subject", "dtype": "string"}, {"name": "rdf_property", "dtype": "string"}, {"name": "rdf_object", "dtype": "string"}, {"name": "rdf_answer", "dtype": "string"}, {"name": "rdf_target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11403929, "num_examples": 34374}, {"name": "validation", "num_bytes": 1614051, "num_examples": 4867}, {"name": "test", "num_bytes": 3304281, "num_examples": 9961}], "download_size": 7595264, "dataset_size": 16322261}, "tags": ["qa", "knowledge-graph", "sparql"]}
2024-01-11T13:18:06+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text-generation #language-English #qa #knowledge-graph #sparql #region-us
Dataset Card for SimpleQuestions-SPARQLtoText ============================================= Table of Contents ----------------- * Dataset Card for SimpleQuestions-SPARQLtoText + Table of Contents + Dataset Description - Dataset Summary * JSON fields * Format of the SPARQL queries * Answerable/unanswerable - Languages + Dataset Structure - Types of questions - Data splits + Additional information - Related datasets - Licencing information - Citation information * This version of the corpus (with normalized SPARQL queries) * Original version Dataset Description ------------------- * Paper: SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022) * Point of Contact: Gwénolé Lecorvé ### Dataset Summary Special version of SimpleQuestions with SPARQL queries formatted for the SPARQL-to-Text task. #### JSON fields The original version of SimpleQuestions is a raw text file listing triples and the natural language question. A JSON version has been generated and augmented with the following fields: * 'rdf\_subject', 'rdf\_property', 'rdf\_object': triple in the Wikidata format (IDs) * 'nl\_subject', 'nl\_property', 'nl\_object': triple with labels retrieved from Wikidata. Some entities do not have labels, they are labelled as 'UNDEFINED\_LABEL' * 'sparql\_query': SPARQL query with Wikidata IDs * 'verbalized\_sparql\_query': SPARQL query with labels * 'original\_nl\_question': original natural language question from SimpleQuestions. This is in lower case. * 'recased\_nl\_question': Version of 'original\_nl\_question' where the named entities have been automatically recased based on the labels of the entities. #### Format of the SPARQL queries * Randomizing the variables names * Delimiters are spaced #### Answerable/unanswerable Some questions in SimpleQuestions cannot be answered. Hence, it originally comes with 2 versions for the train/valid/test sets: one with all entries, another with the answerable questions only. 
### Languages * English Dataset Structure ----------------- ### Types of questions Comparison of question types compared to related datasets: ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples only contain dialogues in the form of follow-up sparql queries. Additional information ---------------------- ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: * Non conversational datasets + SimpleQuestions (from URL + ParaQA (from URL + LC-QuAD 2.0 (from URL * Conversational datasets + CSQA (from URL + WebNLQ-QA (derived from URL ### Licencing information * Content from original dataset: CC-BY 3.0 * New content: CC BY-SA 4.0 information #### This version of the corpus (with normalized SPARQL queries) #### Original version
[ "### Dataset Summary\n\n\nSpecial version of SimpleQuestions with SPARQL queries formatted for the SPARQL-to-Text task.", "#### JSON fields\n\n\nThe original version of SimpleQuestions is a raw text file listing triples and the natural language question. A JSON version has been generated and augmented with the following fields:\n\n\n* 'rdf\\_subject', 'rdf\\_property', 'rdf\\_object': triple in the Wikidata format (IDs)\n* 'nl\\_subject', 'nl\\_property', 'nl\\_object': triple with labels retrieved from Wikidata. Some entities do not have labels, they are labelled as 'UNDEFINED\\_LABEL'\n* 'sparql\\_query': SPARQL query with Wikidata IDs\n* 'verbalized\\_sparql\\_query': SPARQL query with labels\n* 'original\\_nl\\_question': original natural language question from SimpleQuestions. This is in lower case.\n* 'recased\\_nl\\_question': Version of 'original\\_nl\\_question' where the named entities have been automatically recased based on the labels of the entities.", "#### Format of the SPARQL queries\n\n\n* Randomizing the variables names\n* Delimiters are spaced", "#### Answerable/unanswerable\n\n\nSome questions in SimpleQuestions cannot be answered. Hence, it originally comes with 2 versions for the train/valid/test sets: one with all entries, another with the answerable questions only.", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other samples only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY 3.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This version of the corpus (with normalized SPARQL queries)", "#### Original version" ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-generation #language-English #qa #knowledge-graph #sparql #region-us \n", "### Dataset Summary\n\n\nSpecial version of SimpleQuestions with SPARQL queries formatted for the SPARQL-to-Text task.", "#### JSON fields\n\n\nThe original version of SimpleQuestions is a raw text file listing triples and the natural language question. A JSON version has been generated and augmented with the following fields:\n\n\n* 'rdf\\_subject', 'rdf\\_property', 'rdf\\_object': triple in the Wikidata format (IDs)\n* 'nl\\_subject', 'nl\\_property', 'nl\\_object': triple with labels retrieved from Wikidata. Some entities do not have labels, they are labelled as 'UNDEFINED\\_LABEL'\n* 'sparql\\_query': SPARQL query with Wikidata IDs\n* 'verbalized\\_sparql\\_query': SPARQL query with labels\n* 'original\\_nl\\_question': original natural language question from SimpleQuestions. This is in lower case.\n* 'recased\\_nl\\_question': Version of 'original\\_nl\\_question' where the named entities have been automatically recased based on the labels of the entities.", "#### Format of the SPARQL queries\n\n\n* Randomizing the variables names\n* Delimiters are spaced", "#### Answerable/unanswerable\n\n\nSome questions in SimpleQuestions cannot be answered. Hence, it originally comes with 2 versions for the train/valid/test sets: one with all entries, another with the answerable questions only.", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other samples only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY 3.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This version of the corpus (with normalized SPARQL queries)", "#### Original version" ]
[ 43, 33, 262, 26, 55, 13, 18, 56, 88, 26, 18, 4 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #language-English #qa #knowledge-graph #sparql #region-us \n### Dataset Summary\n\n\nSpecial version of SimpleQuestions with SPARQL queries formatted for the SPARQL-to-Text task.#### JSON fields\n\n\nThe original version of SimpleQuestions is a raw text file listing triples and the natural language question. A JSON version has been generated and augmented with the following fields:\n\n\n* 'rdf\\_subject', 'rdf\\_property', 'rdf\\_object': triple in the Wikidata format (IDs)\n* 'nl\\_subject', 'nl\\_property', 'nl\\_object': triple with labels retrieved from Wikidata. Some entities do not have labels, they are labelled as 'UNDEFINED\\_LABEL'\n* 'sparql\\_query': SPARQL query with Wikidata IDs\n* 'verbalized\\_sparql\\_query': SPARQL query with labels\n* 'original\\_nl\\_question': original natural language question from SimpleQuestions. This is in lower case.\n* 'recased\\_nl\\_question': Version of 'original\\_nl\\_question' where the named entities have been automatically recased based on the labels of the entities.#### Format of the SPARQL queries\n\n\n* Randomizing the variables names\n* Delimiters are spaced#### Answerable/unanswerable\n\n\nSome questions in SimpleQuestions cannot be answered. Hence, it originally comes with 2 versions for the train/valid/test sets: one with all entries, another with the answerable questions only.### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------### Types of questions\n\n\nComparison of question types compared to related datasets:### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------" ]
cdd5f044768ffaf43a2ba06f5e3015aa16cbd845
## 1000 examples of law items law_item.jsonl contains 1000 samples of current and effective Chinese laws. e.g. ```python {"title": "《中华人民共和国劳动合同法(2012修正)》", "classification": "类别 : 劳动合同营商环境优化 ", "num": "第十九条", "contents": "第十九条【试用期】劳动合同期限三个月以上不满一年的,试用期不得超过一个月;劳动合同期限一年以上不满三年的,试用期不得超过二个月;三年以上固定期限和无固定期限的劳动合同,试用期不得超过六个月。同一用人单位与同一劳动者只能约定一次试用期。以完成一定工作任务为期限的劳动合同或者劳动合同期限不满三个月的,不得约定试用期。试用期包含在劳动合同期限内。劳动合同仅约定试用期的,试用期不成立,该期限为劳动合同期限。"} ``` ## Using BGE Embedding to compute similarity between query and law item ```python from FlagEmbedding import FlagModel from datasets import load_dataset dataset = load_dataset("FarReelAILab/law_examples") model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation queries = ['我是中职生,在一家酒店工作,试用期内上班7天了,现在辞职工资还会有吗?', '工作时间每天12个小时,早上十点上班中间基本不休息,然后到晚上11-12点下班有时候说加班就加班想辞职走人酒店压着工资不让走怎么办?'] passages = [f"法律:{dataset['train'][11]['title']},法条:{dataset['train'][11]['contents']}", f"法律:{dataset['train'][173]['title']},法条:{dataset['train'][173]['contents']}"] print(f"法律:{dataset['train'][11]['title']},法条:{dataset['train'][11]['contents']}") print(f"法律:{dataset['train'][173]['title']},法条:{dataset['train'][173]['contents']}") q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T print(scores) ``` output: ```python 法律:《中华人民共和国劳动合同法(2012修正)》,法条:第十九条【试用期】劳动合同期限三个月以上不满一年的,试用期不得超过一个月;劳动合同期限一年以上不满三年的,试用期不得超过二个月;三年以上固定期限和无固定期限的劳动合同,试用期不得超过六个月。同一用人单位与同一劳动者只能约定一次试用期。以完成一定工作任务为期限的劳动合同或者劳动合同期限不满三个月的,不得约定试用期。试用期包含在劳动合同期限内。劳动合同仅约定试用期的,试用期不成立,该期限为劳动合同期限。 法律:《中华人民共和国劳动法(2018修正)》,法条:第四十三条用人单位不得违反本法规定延长劳动者的工作时间。 [[0.7573 0.72 ] [0.7417 0.7734]] ```
FarReelAILab/law_examples
[ "license:apache-2.0", "region:us" ]
2024-01-09T12:43:20+00:00
{"license": "apache-2.0"}
2024-01-09T14:20:54+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
## 1000 examples of law items law_item.jsonl contains 1000 samples of current and effective Chinese laws. e.g. ## Using BGE Embedding to compute similarity between query and law item output:
[ "## 1000 examples of law items\nlaw_item.jsonl contains 1000 samples of current and effective Chinese laws. e.g.", "## Using BGE Embedding to compute similarity between query and law item\n\n\noutput:" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "## 1000 examples of law items\nlaw_item.jsonl contains 1000 samples of current and effective Chinese laws. e.g.", "## Using BGE Embedding to compute similarity between query and law item\n\n\noutput:" ]
[ 14, 31, 21 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n## 1000 examples of law items\nlaw_item.jsonl contains 1000 samples of current and effective Chinese laws. e.g.## Using BGE Embedding to compute similarity between query and law item\n\n\noutput:" ]
7e7ff471a9244549c39e04442baa9704a5360387
# assignment-2 Dataset for the second assignment of the Deep Learning course at my university. The dataset appears to be based on the [WinoGrande](https://winogrande.allenai.org) train_xl set (for the original training.jsonl) and dev/validation set (for test.jsonl). The provided training set has been randomly split into training (85%) and validation (15%) sets. ### Citation Information ``` @InProceedings{ai2:winogrande, title = {WinoGrande: An Adversarial Winograd Schema Challenge at Scale}, authors={Keisuke, Sakaguchi and Ronan, Le Bras and Chandra, Bhagavatula and Yejin, Choi }, year={2019} } ``` ### Credits Credits to allenai and the authors of the WinoGrande dataset. The original dataset repository can be found on [github](https://github.com/allenai/winogrande).
rw-r-r-0644/assignment-2
[ "task_categories:fill-mask", "task_categories:text-classification", "task_categories:text2text-generation", "size_categories:10K<n<100K", "language:en", "license:cc-by-2.0", "region:us" ]
2024-01-09T12:46:19+00:00
{"language": ["en"], "license": "cc-by-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["fill-mask", "text-classification", "text2text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.jsonl"}, {"split": "validation", "path": "validation.jsonl"}, {"split": "test", "path": "test.jsonl"}]}]}
2024-01-09T20:21:12+00:00
[]
[ "en" ]
TAGS #task_categories-fill-mask #task_categories-text-classification #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-2.0 #region-us
# assignment-2 Dataset for the second assignment of the Deep Learning course at my university. The dataset appears to be based on the WinoGrande train_xl set (for the original URL) and dev/validation set (for URL). The provided training set has been randomly split into training (85%) and validation (15%) sets. ### Credits Credits to allenai and the authors of the WinoGrande dataset. The original dataset repository can be found on github.
[ "# assignment-2\nDataset for the second assignment of the Deep Learning course at my university. \nThe dataset appears to be based on the WinoGrande train_xl set (for the original URL) and dev/validation set (for URL). \nThe provided training set has been randomly split into training (85%) and validation (15%) sets.", "### Credits\nCredits to allenai and the authors of the WinoGrande dataset.\nThe original dataset repository can be found on github." ]
[ "TAGS\n#task_categories-fill-mask #task_categories-text-classification #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-2.0 #region-us \n", "# assignment-2\nDataset for the second assignment of the Deep Learning course at my university. \nThe dataset appears to be based on the WinoGrande train_xl set (for the original URL) and dev/validation set (for URL). \nThe provided training set has been randomly split into training (85%) and validation (15%) sets.", "### Credits\nCredits to allenai and the authors of the WinoGrande dataset.\nThe original dataset repository can be found on github." ]
[ 66, 76, 36 ]
[ "passage: TAGS\n#task_categories-fill-mask #task_categories-text-classification #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-2.0 #region-us \n# assignment-2\nDataset for the second assignment of the Deep Learning course at my university. \nThe dataset appears to be based on the WinoGrande train_xl set (for the original URL) and dev/validation set (for URL). \nThe provided training set has been randomly split into training (85%) and validation (15%) sets.### Credits\nCredits to allenai and the authors of the WinoGrande dataset.\nThe original dataset repository can be found on github." ]
40e0573769d55578317e3421f84cb52b877dc61d
## verdicts examples verdicts_200.jsonl contains 200 examples of verdicts from Chinese Judgements Online, we process the datasets for semantic retrieval ## using BGE to compute similarity between query and verdict ```python from FlagEmbedding import FlagModel from datasets import load_dataset dataset = load_dataset("FarReelAILab/verdicts") model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation queries = ['撞车后,交警不给出全责的认定书,对方车又不签字,事情就将起来了,我该怎么办', '因为做生意资金不足,借款高利贷,写下凭据到时还不了钱就把90㎡的房子抵押给高利贷方这凭据有没有法律效益?'] passages = [dataset['train'][11]['文书内容'], dataset['train'][173]['文书内容']] print(dataset['train'][11]['文书内容']) print(dataset['train'][173]['文书内容'],) q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T print(scores) ``` output: ```python 山东省邹平县人民法院 民 事 判 决 书 (2017)鲁1626民初1415号 原告:袁国庆。 委托诉讼代理人:郭甜甜,山东远识律师事务所律师(特别授权代理)。 被告:张丽娟。 被告:中国人民财产保险股份有限公司淄博市分公司,住所地张店区。 负责人:展海勇,保险公司总经理。 委托诉讼代理人:段秉超,山东博睿(淄博)律师事务所律师(特别授权代理)。 原告袁国庆与被告张丽娟、中国人民财产保险股份有限公司淄博市分公司(以下简称保险公司)机动车交通事故责任纠纷一案,本院于2017年4月12日立案后,依法适用简易程序于2016年6月5日公开开庭进行了审理。原告袁国庆的委托诉讼代理人郭甜甜、被告张丽娟、被告保险公司的委托诉讼代理人段秉超均到庭参加诉讼。本案现已审理终结。 原告袁国庆向本院提出诉讼请求:1.依法判令被告立即赔偿原告的各项费用共计38000元;2.由被告承担本案的一切诉讼费用。诉讼过程中,原告袁国庆增加诉讼请求至108000元。事实与理由:2016年4月20日6时30分左右,被告张丽娟驾驶鲁C×××××号轿车由南向北行驶至邹平县苑城路口处时,与由东向西行驶的原告驾驶的鲁V×××××号二轮摩托车发生事故,致原告受伤、摩托车损坏。该事故经邹平县公安局交警部门认定,被告张丽娟负事故的全部责任,原告无事故责任。因赔偿事宜,原告诉至本院。 ... 
被告将赔偿款直接汇入原告袁国庆中国邮储银行焦桥支行账号62×××25。 被告将应负担的诉讼费汇入邹平县人民法院在中国建���银行邹平支行的账号:37×××00。 如不服本判决,可以在判决书送达之日起十五日内,向本院递交上诉状,并按对方当事人的人数或者代表人的人数提出副本,上诉于山东省滨州市中级人民法院。 审判员  梁姗姗 二〇一七年六月十五日 书记员  刘传龙 江苏省连云港市中级人民法院 民 事 判 决 书 (2021)苏07民终780号 上诉人(原审被告):蔡宽跃,男,1992年11月8日生,汉族,新云台码头有限公司员工,住连云区。 委托诉讼代理人:顾东杰,江苏新浦律师事务所律师。 被上诉人(原审原告):刘书麟,男,1993年2月12日生,汉族,徐圩新区应急抢险救援大队员工,住连云区。 上诉人蔡宽跃因与被上诉人刘书麟民间借贷纠纷一案,不服连云港市连云区人民法院(2020)苏0703民初1730号民事判决,向本院提起上诉。本院于2021年2月19日立案后,依法组成合议庭并于同年4月6日公开开庭进行了审理。上诉人蔡宽跃的委托诉讼代理人顾东杰、被上诉人刘书麟到庭参加诉讼。本案现已审理终结。 上诉人蔡宽跃上诉请求:1、请求撤销连云区人民法院(202⑴苏0703民初1730号民事判决书,发回重审或者依法改判上诉人给付被上诉人借款本金14572元;2、一二审诉讼费用由被上诉人承担。事实与理由:上诉人与被上诉人之间存在多笔高利贷借款,上诉人已经超额返还被上诉人借款高利息,应当直接从冲抵本案借款本金,具体如下:1、2018年8月11日,被上诉人出借上诉人3万元,双方约定2018年11月还款,当天上诉人支付月息1500元,被上诉人实际出借28500元。双方约定月息1500元已经超过原民间借贷司法解释规定的年息24%标准,超过部分应当认定为还借款本金。根据一审被上诉人自认,上诉人于2018年9月12日支付月息1500元,2018年8月11日至2018年9月12日上诉人应付利息589元,实际支付1500元减去应付利息589元,超出的911元应当认定为偿还借款本金,故截止到2018年9月12日,尚欠借款本金27589元;2018年10月15日支付月息1500元,2018年9月13曰至2018年10月15日上诉人应付利息570元,实际支付1500元减去应付利息570元,超出的930元应当认定为偿还借款本金,故截止到2018年10月15日,尚欠借款本金26659元;11月份借款到期后,双方又约定续借1个月,上诉人于2018年11月14日支付借款利息3500元,2018年10月16日至2018年11月14日上诉人应付利息497元,实际支付4500元减去应付利息497元,超出的4003元应当认定为偿还借款本金,故截止到2018年11月14日,尚欠借款本金22656元;2018年12月15日上诉人支付30000元,2018年11月15日至2018年12月15日上诉人应付利息438元,实际支付30000元减去应付利息438元,超出的29562元应当认定为偿还借款本金,故截止到2018年12月15日,上诉人就该笔借款还款超出6906元,该金额应当在本案中予以冲抵。2、2019年1月21日,上诉人以案外人杨某名义向被上诉人借款50000元,上诉人当天支付日息600元,被上诉人实际出借49400元,双方约定日息600元已经超过原民间借贷司法解释规定的年息24%标准,超过部分应当认定为还借款本金。该笔借款上诉人于2019年2月17日支付被上诉人50000元,期间均按照每日600元支付利息。2019年1月21日至2019年2月17日上诉人应支付利息为856元,而在此期间上诉人支付利息共计15600元,超出的14744元应当认定为偿还借款本金,故截止到2019年2月17日,上诉人就该笔借款还款超出15344元,该金额应当在本案中予以冲抵。综上,上诉人与被上诉人之间的多笔高利贷借款,上诉人多还款合计22250元,上述借款的还款事实一审被上诉人均予以认可,所以应当从本案争议借款本金中扣除22250元。一审审理过程中,忽略该部分事实,在判决中没有予以冲抵系事实认定错误,故请求二审法院查清事实,依法支持上诉人的上诉请求。 被上诉人刘书麟答辩认为:驳回上诉人的上诉请求,一审对本案事实已经查明了。 在一审审理中,刘书麟诉请:判令被告偿还借款本金5万元及自2019年5月1日起至2020年11月11日止按中国人民银行同期贷款利率四倍计算的利息;判令被告承担本案全部诉讼费用;判令被告承担原告第一次诉讼的律师费2500元。 
蔡宽跃一审辩称,刘书麟、蔡宽跃系同学关系,刘书麟原来的名字叫刘泰,蔡宽确实向其借过5万块钱,并于2019年1月2日出具了一份借款协议,一张身份证复印件,注明该复印件用于向刘书麟借款5万元,用于资金周转,于2019年5月1日归还;还出具了一张借条,注明借款5万元用于资金周转,于2019年5月1日归还,还出具了一份借款抵押协议,蔡宽跃用其自有车辆提供抵押,也是借款5万元,于2019年5月1日归还;还与刘某1共同出具过一张借条,也是借款5万元,于2019年5月1日归还,刘某1用其房产提供担保。在(2019)苏0703民初2444号案中,刘书麟提供的是身份证复印件这张条子,并且在2019年11月14日庭审过程中声称该5万元借款没有利息,刘书麟在2444号案中提供的证据是一个借条,在本案中又提供了借款协议的复印件,如果刘书麟确实想要把本案说清楚,应当把所有的协议、借条全部一次性提交,不能每一次拿出不同的证据来主张权利,如果本案再得不到支持,有可能还拿出其他的借款协议、担保协议等等来起诉,这种行为也是一种虚假诉讼的行为。第一次起诉没有利息,而本次又提出四倍的利息,本身就是一种虚假诉讼的行为。在2444号案中蔡宽跃陈述当时借款是转账5万元,当日又通过银行转账向刘书麟付了4000元,这种行为本身就是一种套路贷的表现形式,对于该4000元刘书麟在庭审时也是认可的。正是基于本案当时约定的利息,蔡宽跃之后将该笔款项已经偿还给了刘书麟,在2444号案结束以后,蔡宽跃通过网络多种方式查询到了还款记录。刘书麟在本案中主张的之前案件的律师费是没有依据的。 ... 本院认为,上诉人蔡宽跃在上诉中主张和理由,均为其在一审中作为被告时的抗辩主张和理由,而对于上述主张和理由,一审判决均给予充分的回应,并作出了不在本案中一并处理的结论,本院认为一审判决的这一处理结论,并无不当,故,对于上诉人的相关上诉主张和理由不予支持。 现依据《最高人民法院关于适用时间效力的若干规定》第一条、《中华人民共和国民事诉讼法》第一百七十条第一款第(一)项之规定,判决如下: 驳回上诉,维持原判决。 二审案件受理费1050元(上诉人蔡宽跃已预交),由蔡宽跃负担。 本判决为终审判决。 审判长  安述峰 审判员  刘亚洲 审判员  任李艳 二〇二一年四月九日 书记员  王丹丹 法律条文附录 一、《中华人民共和国民事诉讼法》 第一百七十条第二审人民法院对上诉案件,经过审理,按照下列情形,分别处理:(一)原判决、裁定认定事实清楚,适用法律正确的,以判决、裁定方式驳回上诉,维持原判决、裁定;(二)原判决、裁定认定事实错误或者适用法律错误的,以判决、裁定方式依法改判、撤销或者变更;(三)原判决认定基本事实不清的,裁定撤销原判决,发回原审人民法院重审,或者查清事实后改判;(四)原判决遗漏当事人或者违法缺席判决等严重违反法定程序的,裁定撤销原判决,发回原审人民法院重审。原审人民法院对发回重审的案件作出判决后,当事人提起上诉的,第二审人民法院不得再次发回重审。 [[0.5845 0.4473] [0.4902 0.618 ]] ```
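The snippet above scores two hand-picked passages; the same dot-product scoring (`scores = q_embeddings @ p_embeddings.T`) extends naturally to ranking all 200 verdicts for a query. A minimal sketch, with toy vectors standing in for real BGE embeddings:

```python
def top_k(query_vec, passage_vecs, k=2):
    """Rank passages by dot-product score against a query vector,
    mirroring `q_embeddings @ p_embeddings.T` in the example above.
    Plain lists stand in for real BGE embedding arrays."""
    scores = [sum(q * p for q, p in zip(query_vec, vec)) for vec in passage_vecs]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(i, scores[i]) for i in order[:k]]

passages = [[0.9, 0.1, 0.0],   # index 0: closest to the query below
            [0.2, 0.8, 0.1],
            [0.1, 0.1, 0.9]]
query = [0.8, 0.2, 0.0]
print(top_k(query, passages))  # index 0 ranks first
```

With real data, `passage_vecs` would be `model.encode` applied to all 200 `文书内容` fields, computed once and reused across queries.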
FarReelAILab/verdicts
[ "license:apache-2.0", "region:us" ]
2024-01-09T12:51:04+00:00
{"license": "apache-2.0"}
2024-01-09T16:29:51+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
## verdicts examples verdicts_200.jsonl contains 200 examples of verdicts from Chinese Judgements Online, we process the datasets for semantic retrieval ## using BGE to compute similarity between query and verdict output:
[ "## verdicts examples\nverdicts_200.jsonl contains 200 examples of verdicts from Chinese Judgements Online, we process the datasets for semantic retrieval", "## using BGE to compute similarity between query and verdict\n\noutput:" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "## verdicts examples\nverdicts_200.jsonl contains 200 examples of verdicts from Chinese Judgements Online, we process the datasets for semantic retrieval", "## using BGE to compute similarity between query and verdict\n\noutput:" ]
[ 14, 43, 17 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n## verdicts examples\nverdicts_200.jsonl contains 200 examples of verdicts from Chinese Judgements Online, we process the datasets for semantic retrieval## using BGE to compute similarity between query and verdict\n\noutput:" ]
c8f49deadd4a1c2aecad1b2c4c196df56470ebae
# Dataset Card for LC-QuAD 2.0 - SPARQLtoText version ## Table of Contents - [Dataset Card for LC-QuAD 2.0 - SPARQLtoText version](#dataset-card-for-lc-quad-20---sparqltotext-version) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [New field `simplified_query`](#new-field-simplified_query) - [New split "valid"](#new-split-valid) - [Supported tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Types of questions](#types-of-questions) - [Data splits](#data-splits) - [Additional information](#additional-information) - [Related datasets](#related-datasets) - [Licencing information](#licencing-information) - [Citation information](#citation-information) - [This version of the corpus (with normalized SPARQL queries)](#this-version-of-the-corpus-with-normalized-sparql-queries) - [Original version](#original-version) ## Dataset Description - **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/) - **Point of Contact:** Gwénolé Lecorvé ### Dataset Summary Special version of [LC-QuAD 2.0](https://huggingface.co/datasets/lc_quad) for the SPARQL-to-Text task #### New field `simplified_query` The new field is named "simplified_query". It results from applying the following steps on the field "query": * Replacing URIs with a simpler format with prefix "resource:", "property:" and "ontology:". * Spacing the delimiters `(`, `{`, `.`, `}`, `)`. * Adding diversity to some filters which test a number (`contains ( ?var, 'number' )` can become `contains ?var = number`) * Randomizing the variable names * Shuffling the clauses #### New split "valid" A validation set was randomly extracted from the test set to represent 10% of the whole dataset. 
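For illustration, the URI-shortening and delimiter-spacing steps above might look like the sketch below (the regexes and prefix mapping are assumptions; the filter diversification, variable renaming and clause shuffling steps are omitted):

```python
import re

def simplify(query):
    """Sketch of the first two 'simplified_query' normalization steps
    described above; the corpus authors' actual rules may differ."""
    # Replace full Wikidata URIs with short prefixes (assumed mapping).
    query = re.sub(r"<http://www\.wikidata\.org/entity/(\w+)>", r"resource:\1", query)
    query = re.sub(r"<http://www\.wikidata\.org/prop/direct/(\w+)>", r"property:\1", query)
    # Space out the delimiters ( { . } ) and collapse the whitespace.
    query = re.sub(r"([(){}.])", r" \1 ", query)
    return re.sub(r"\s+", " ", query).strip()

q = ("SELECT ?a WHERE { <http://www.wikidata.org/entity/Q42> "
     "<http://www.wikidata.org/prop/direct/P31> ?a . }")
print(simplify(q))  # SELECT ?a WHERE { resource:Q42 property:P31 ?a . }
```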
### Supported tasks - Knowledge-based question-answering - Text-to-SPARQL conversion - SPARQL-to-Text conversion ### Languages - English ## Dataset Structure The corpus follows the global architecture from the original version of CSQA (https://amritasaha1812.github.io/CSQA/). There is one directory of the train, dev, and test sets, respectively. Dialogues are stored in separate directories, 100 dialogues per directory. Finally, each dialogue is stored in a JSON file as a list of turns. ### Types of questions Comparison of question types compared to related datasets: | | | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) | |--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:| | **Number of triplets in query** | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | ✓ | ✓ | ✓ | ✓ | | | More | | | ✓ | ✓ | ✓ | | **Logical connector between triplets** | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ | | | Disjunction | | | | ✓ | ✓ | | | Exclusion | | | | ✓ | ✓ | | **Topology of the query graph** | Direct | ✓ | ✓ | ✓ | ✓ | ✓ | | | Sibling | | ✓ | ✓ | ✓ | ✓ | | | Chain | | ✓ | ✓ | ✓ | ✓ | | | Mixed | | | ✓ | | ✓ | | | Other | | ✓ | ✓ | ✓ | ✓ | | **Variable typing in the query** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | Target variable | | ✓ | ✓ | ✓ | ✓ | | | Internal variable | | ✓ | ✓ | ✓ | ✓ | | **Comparisons clauses** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | String | | | ✓ | | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Date | | | ✓ | | ✓ | | **Superlative clauses** | No | ✓ | ✓ | ✓ | ✓ | ✓ | | | Yes | | | | ✓ | | | **Answer type** | Entity (open) | ✓ | ✓ | ✓ | ✓ | ✓ | | | Entity (closed) | | | | ✓ | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | 
Boolean | | ✓ | ✓ | ✓ | ✓ | | **Answer cardinality** | 0 (unanswerable) | | | ✓ | | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | More | | ✓ | ✓ | ✓ | ✓ | | **Number of target variables** | 0 (⇒ ASK verb) | | ✓ | ✓ | ✓ | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | | ✓ | | ✓ | | **Dialogue context** | Self-sufficient | ✓ | ✓ | ✓ | ✓ | ✓ | | | Coreference | | | | ✓ | ✓ | | | Ellipsis | | | | ✓ | ✓ | | **Meaning** | Meaningful | ✓ | ✓ | ✓ | ✓ | ✓ | | | Non-sense | | | | | ✓ | ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries. | | Train | Validation | Test | | --------------------- | ---------- | ---------- | ---------- | | Questions | 21,000 | 3,000 | 6,000 | | NL question per query | 1 | | Characters per query | 108 (± 36) | | Tokens per question | 10.6 (± 3.9) | ## Additional information ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: - Non conversational datasets - [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) (from https://github.com/askplatypus/wikidata-simplequestions) - [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) (from https://github.com/barshana-banerjee/ParaQA) - [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) (from http://lc-quad.sda.tech/) - Conversational datasets - [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) (from https://amritasaha1812.github.io/CSQA/) - [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) (derived from https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/release_v3.0) ### Licencing information * Content from original dataset: CC-BY 3.0 * New content: CC BY-SA 4.0 ### Citation information #### This version of the corpus (with normalized SPARQL queries) ```bibtex 
@inproceedings{lecorve2022sparql2text, title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications}, author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.}, journal={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)}, year={2022} } ``` #### Original version ```bibtex @inproceedings{dubey2017lc2, title={LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia}, author={Dubey, Mohnish and Banerjee, Debayan and Abdelkawi, Abdelrahman and Lehmann, Jens}, booktitle={Proceedings of the 18th International Semantic Web Conference (ISWC)}, year={2019}, organization={Springer} } ```
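Once the splits are loaded (e.g. with `datasets.load_dataset` on this repository), each row carries parallel fields such as `question` and `simplified_query` (see the schema in the dataset metadata). A sketch of turning rows into SPARQL-to-Text training pairs, using toy rows shaped like the corpus instead of a real download:

```python
def to_sparql2text_pairs(rows):
    """Build (source, target) pairs for SPARQL-to-Text generation.
    Field names follow this corpus's schema; skipping rows without a
    verbalised question is a robustness assumption, not a documented rule."""
    return [(row["simplified_query"], row["question"])
            for row in rows if row.get("question")]

rows = [  # toy rows mimicking the corpus schema
    {"uid": 1,
     "simplified_query": "select ?x where { resource:Q42 property:P31 ?x . }",
     "question": "What is Douglas Adams an instance of?"},
    {"uid": 2,
     "simplified_query": "ask { resource:Q1 property:P31 ?x . }",
     "question": None},
]
pairs = to_sparql2text_pairs(rows)
print(len(pairs))  # 1
```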
OrangeInnov/lc_quad2-sparqltotext
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "qa", "knowledge-graph", "sparql", "region:us" ]
2024-01-09T12:57:17+00:00
{"language": ["en"], "task_categories": ["question-answering", "text-generation"], "dataset_info": {"features": [{"name": "uid", "dtype": "int32"}, {"name": "NNQT_question", "dtype": "string"}, {"name": "paraphrased_question", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "simplified_query", "dtype": "string"}, {"name": "sparql_dbpedia18", "dtype": "string"}, {"name": "sparql_wikidata", "dtype": "string"}, {"name": "answer", "list": "string"}, {"name": "solved_answer", "list": "string"}, {"name": "subgraph", "dtype": "string"}, {"name": "template", "dtype": "string"}, {"name": "template_id", "dtype": "string"}, {"name": "template_index", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 241621115, "num_examples": 21101}, {"name": "validation", "num_bytes": 11306539, "num_examples": 3010}, {"name": "test", "num_bytes": 21146458, "num_examples": 6024}], "download_size": 79003648, "dataset_size": 274074112}, "tags": ["qa", "knowledge-graph", "sparql"]}
2024-01-11T13:15:50+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text-generation #language-English #qa #knowledge-graph #sparql #region-us
Dataset Card for LC-QuAD 2.0 - SPARQLtoText version =================================================== Table of Contents ----------------- * Dataset Card for LC-QuAD 2.0 - SPARQLtoText version + Table of Contents + Dataset Description - Dataset Summary * New field 'simplified\_query' * New split "valid" - Supported tasks - Languages + Dataset Structure - Types of questions - Data splits + Additional information - Related datasets - Licencing information - Citation information * This version of the corpus (with normalized SPARQL queries) * Original version Dataset Description ------------------- * Paper: SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022) * Point of Contact: Gwénolé Lecorvé ### Dataset Summary Special version of LC-QuAD 2.0 for the SPARQL-to-Text task #### New field 'simplified\_query' The new field is named "simplified\_query". It results from applying the following steps on the field "query": * Replacing URIs with a simpler format with prefix "resource:", "property:" and "ontology:". * Spacing the delimiters '(', '{', '.', '}', ')'. * Adding diversity to some filters which test a number ('contains ( ?var, 'number' )' can become 'contains ?var = number') * Randomizing the variable names * Shuffling the clauses #### New split "valid" A validation set was randomly extracted from the test set to represent 10% of the whole dataset. ### Supported tasks * Knowledge-based question-answering * Text-to-SPARQL conversion * SPARQL-to-Text conversion ### Languages * English Dataset Structure ----------------- The corpus follows the global architecture from the original version of CSQA (URL). There is one directory of the train, dev, and test sets, respectively. Dialogues are stored in separate directories, 100 dialogues per directory. Finally, each dialogue is stored in a JSON file as a list of turns. 
### Types of questions Comparison of question types compared to related datasets: ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples only contain dialogues in the form of follow-up sparql queries. Additional information ---------------------- ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: * Non conversational datasets + SimpleQuestions (from URL) + ParaQA (from URL) + LC-QuAD 2.0 (from URL) * Conversational datasets + CSQA (from URL) + WebNLQ-QA (derived from URL) ### Licencing information * Content from original dataset: CC-BY 3.0 * New content: CC BY-SA 4.0 ### Citation information #### This version of the corpus (with normalized SPARQL queries) #### Original version
[ "### Dataset Summary\n\n\nSpecial version of LC-QuAD 2.0 for the SPARQL-to-Text task", "#### New field 'simplified\\_query'\n\n\nNew field is named \"simplified\\_query\". It results from applying the following step on the field \"query\":\n\n\n* Replacing URIs with a simpler format with prefix \"resource:\", \"property:\" and \"ontology:\".\n* Spacing the delimiters '(', '{', '.', '}', ')'.\n* Adding diversity to some filters which test a number ('contains ( ?var, 'number' )' can become 'contains ?var = number'\n* Randomizing the variables names\n* Shuffling the clauses", "#### New split \"valid\"\n\n\nA validation set was randonly extracted from the test set to represent 10% of the whole dataset.", "### Supported tasks\n\n\n* Knowledge-based question-answering\n* Text-to-SPARQL conversion\n* SPARQL-to-Text conversion", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------\n\n\nThe corpus follows the global architecture from the original version of CSQA (URL\n\n\nThere is one directory of the train, dev, and test sets, respectively.\n\n\nDialogues are stored in separate directories, 100 dialogues per directory.\n\n\nFinally, each dialogue is stored in a JSON file as a list of turns.", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY 3.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This version of the corpus (with normalized SPARQL queries)", "#### Original version" ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-generation #language-English #qa #knowledge-graph #sparql #region-us \n", "### Dataset Summary\n\n\nSpecial version of LC-QuAD 2.0 for the SPARQL-to-Text task", "#### New field 'simplified\\_query'\n\n\nNew field is named \"simplified\\_query\". It results from applying the following step on the field \"query\":\n\n\n* Replacing URIs with a simpler format with prefix \"resource:\", \"property:\" and \"ontology:\".\n* Spacing the delimiters '(', '{', '.', '}', ')'.\n* Adding diversity to some filters which test a number ('contains ( ?var, 'number' )' can become 'contains ?var = number'\n* Randomizing the variables names\n* Shuffling the clauses", "#### New split \"valid\"\n\n\nA validation set was randonly extracted from the test set to represent 10% of the whole dataset.", "### Supported tasks\n\n\n* Knowledge-based question-answering\n* Text-to-SPARQL conversion\n* SPARQL-to-Text conversion", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------\n\n\nThe corpus follows the global architecture from the original version of CSQA (URL\n\n\nThere is one directory of the train, dev, and test sets, respectively.\n\n\nDialogues are stored in separate directories, 100 dialogues per directory.\n\n\nFinally, each dialogue is stored in a JSON file as a list of turns.", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY 3.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This version of the corpus (with normalized SPARQL queries)", "#### Original version" ]
[ 43, 26, 153, 31, 34, 85, 18, 56, 88, 26, 18, 4 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #language-English #qa #knowledge-graph #sparql #region-us \n### Dataset Summary\n\n\nSpecial version of LC-QuAD 2.0 for the SPARQL-to-Text task#### New field 'simplified\\_query'\n\n\nNew field is named \"simplified\\_query\". It results from applying the following step on the field \"query\":\n\n\n* Replacing URIs with a simpler format with prefix \"resource:\", \"property:\" and \"ontology:\".\n* Spacing the delimiters '(', '{', '.', '}', ')'.\n* Adding diversity to some filters which test a number ('contains ( ?var, 'number' )' can become 'contains ?var = number'\n* Randomizing the variables names\n* Shuffling the clauses#### New split \"valid\"\n\n\nA validation set was randonly extracted from the test set to represent 10% of the whole dataset.### Supported tasks\n\n\n* Knowledge-based question-answering\n* Text-to-SPARQL conversion\n* SPARQL-to-Text conversion### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------\n\n\nThe corpus follows the global architecture from the original version of CSQA (URL\n\n\nThere is one directory of the train, dev, and test sets, respectively.\n\n\nDialogues are stored in separate directories, 100 dialogues per directory.\n\n\nFinally, each dialogue is stored in a JSON file as a list of turns.### Types of questions\n\n\nComparison of question types compared to related datasets:### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------" ]
12d5ecde6cd039eca68b6378fbfea7e0fae5284a
# Dataset Card for ParaQA-SPARQLtoText ## Table of Contents - [Dataset Card for ParaQA-SPARQLtoText](#dataset-card-for-paraqa-sparqltotext) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [New field `simplified_query`](#new-field-simplified_query) - [New split "valid"](#new-split-valid) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Types of questions](#types-of-questions) - [Data splits](#data-splits) - [Additional information](#additional-information) - [Related datasets](#related-datasets) - [Licencing information](#licencing-information) - [Citation information](#citation-information) - [This version of the corpus (with normalized SPARQL queries)](#this-version-of-the-corpus-with-normalized-sparql-queries) - [Original version](#original-version) ## Dataset Description - **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/) - **Point of Contact:** Gwénolé Lecorvé ### Dataset Summary Special version of ParaQA with SPARQL queries formatted for the SPARQL-to-Text task #### New field `simplified_query` The new field is named "simplified_query". It results from applying the following steps on the field "query": * Replacing URIs with a simpler format with prefix "resource:", "property:" and "ontology:". * Spacing the delimiters `(`, `{`, `.`, `}`, `)`. * Randomizing the variable names * Shuffling the clauses #### New split "valid" A validation set was randomly extracted from the test set to represent 10% of the whole dataset. 
### Languages - English ## Dataset Structure ### Types of questions Comparison of question types compared to related datasets: | | | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) | |--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:| | **Number of triplets in query** | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | ✓ | ✓ | ✓ | ✓ | | | More | | | ✓ | ✓ | ✓ | | **Logical connector between triplets** | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ | | | Disjunction | | | | ✓ | ✓ | | | Exclusion | | | | ✓ | ✓ | | **Topology of the query graph** | Direct | ✓ | ✓ | ✓ | ✓ | ✓ | | | Sibling | | ✓ | ✓ | ✓ | ✓ | | | Chain | | ✓ | ✓ | ✓ | ✓ | | | Mixed | | | ✓ | | ✓ | | | Other | | ✓ | ✓ | ✓ | ✓ | | **Variable typing in the query** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | Target variable | | ✓ | ✓ | ✓ | ✓ | | | Internal variable | | ✓ | ✓ | ✓ | ✓ | | **Comparisons clauses** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | String | | | ✓ | | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Date | | | ✓ | | ✓ | | **Superlative clauses** | No | ✓ | ✓ | ✓ | ✓ | ✓ | | | Yes | | | | ✓ | | | **Answer type** | Entity (open) | ✓ | ✓ | ✓ | ✓ | ✓ | | | Entity (closed) | | | | ✓ | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Boolean | | ✓ | ✓ | ✓ | ✓ | | **Answer cardinality** | 0 (unanswerable) | | | ✓ | | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | More | | ✓ | ✓ | ✓ | ✓ | | **Number of target variables** | 0 (⇒ ASK verb) | | ✓ | ✓ | ✓ | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | | ✓ | | ✓ | | **Dialogue context** | Self-sufficient | ✓ | ✓ | ✓ | ✓ | ✓ | | | Coreference | | | | ✓ | ✓ | | | Ellipsis | | | | ✓ | ✓ | | **Meaning** | Meaningful | ✓ | ✓ | ✓ | ✓ | ✓ | | | 
Non-sense | | | | | ✓ | ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries. | | Train | Validation | Test | | --------------------- | ---------- | ---------- | ---------- | | Questions | 3,500 | 500 | 1,000 | | NL question per query | 1 | | Characters per query | 103 (± 27) | | Tokens per question | 10.3 (± 3.7) | ## Additional information ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: - Non conversational datasets - [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) (from https://github.com/askplatypus/wikidata-simplequestions) - [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) (from https://github.com/barshana-banerjee/ParaQA) - [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) (from http://lc-quad.sda.tech/) - Conversational datasets - [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) (from https://amritasaha1812.github.io/CSQA/) - [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) (derived from https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/release_v3.0) ### Licencing information * Content from original dataset: CC-BY 4.0 * New content: CC BY-SA 4.0 ### Citation information #### This version of the corpus (with normalized SPARQL queries) ```bibtex @inproceedings{lecorve2022sparql2text, title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications}, author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.}, journal={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)}, year={2022} } ``` #### Original version 
```bibtex @inproceedings{kacupaj2021paraqa, title={Paraqa: a question answering dataset with paraphrase responses for single-turn conversation}, author={Kacupaj, Endri and Banerjee, Barshana and Singh, Kuldeep and Lehmann, Jens}, booktitle={European semantic web conference}, pages={598--613}, year={2021}, organization={Springer} } ```
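Each ParaQA row carries up to eight answer verbalizations (fields `verbalized_answer` through `verbalized_answer_8` in the dataset metadata). Collecting the non-empty paraphrases of a row can be sketched as follows; the toy row is illustrative, not taken from the corpus:

```python
def collect_paraphrases(row):
    """Gather the non-empty answer verbalizations of a ParaQA row.
    Field names follow the corpus schema (verbalized_answer and
    verbalized_answer_2 .. verbalized_answer_8); treating missing
    paraphrases as None/absent is an assumption."""
    keys = ["verbalized_answer"] + [f"verbalized_answer_{i}" for i in range(2, 9)]
    return [row[k] for k in keys if row.get(k)]

row = {  # illustrative row, not taken from the corpus
    "verbalized_answer": "Paris is the capital of France.",
    "verbalized_answer_2": "The capital of France is Paris.",
    "verbalized_answer_3": None,
}
print(len(collect_paraphrases(row)))  # 2
```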
OrangeInnov/paraqa-sparqltotext
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "task_categories:text2text-generation", "qa", "knowledge-graph", "sparql", "region:us" ]
2024-01-09T13:13:04+00:00
{"task_categories": ["conversational", "question-answering", "text-generation", "text2text-generation"], "dataset_info": {"features": [{"name": "uid", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "simplified_query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "verbalized_answer", "dtype": "string"}, {"name": "verbalized_answer_2", "dtype": "string"}, {"name": "verbalized_answer_3", "dtype": "string"}, {"name": "verbalized_answer_4", "dtype": "string"}, {"name": "verbalized_answer_5", "dtype": "string"}, {"name": "verbalized_answer_6", "dtype": "string"}, {"name": "verbalized_answer_7", "dtype": "string"}, {"name": "verbalized_answer_8", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2540548, "num_examples": 3500}, {"name": "validation", "num_bytes": 369571, "num_examples": 500}, {"name": "test", "num_bytes": 722302, "num_examples": 1000}], "download_size": 1750172, "dataset_size": 3632421}, "tags": ["qa", "knowledge-graph", "sparql"]}
2024-01-11T13:17:48+00:00
[]
[]
TAGS #task_categories-conversational #task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #qa #knowledge-graph #sparql #region-us
Dataset Card for ParaQA-SPARQLtoText ==================================== Table of Contents ----------------- * Dataset Card for ParaQA-SPARQLtoText + Table of Contents + Dataset Description - Dataset Summary * New field 'simplified\_query' * New split "valid" - Languages + Dataset Structure - Types of questions - Data splits + Additional information - Related datasets - Licencing information - Citation information * This version of the corpus (with normalized SPARQL queries) * Original version Dataset Description ------------------- * Paper: SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022) * Point of Contact: Gwénolé Lecorvé ### Dataset Summary Special version of ParaQA with SPARQL queries formatted for the SPARQL-to-Text task #### New field 'simplified\_query' The new field is named "simplified\_query". It results from applying the following steps on the field "query": * Replacing URIs with a simpler format with prefix "resource:", "property:" and "ontology:". * Spacing the delimiters '(', '{', '.', '}', ')'. * Randomizing the variable names * Shuffling the clauses #### New split "valid" A validation set was randomly extracted from the test set to represent 10% of the whole dataset. ### Languages * English Dataset Structure ----------------- ### Types of questions Comparison of question types compared to related datasets: ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples only contain dialogues in the form of follow-up sparql queries. 
Additional information ---------------------- ### Related datasets This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely: * Non-conversational datasets + SimpleQuestions (from URL + ParaQA (from URL + LC-QuAD 2.0 (from URL * Conversational datasets + CSQA (from URL + WebNLQ-QA (derived from URL ### Licensing information * Content from original dataset: CC-BY 4.0 * New content: CC BY-SA 4.0 ### Citation information #### This version of the corpus (with normalized SPARQL queries) #### Original version
[ "### Dataset Summary\n\n\nSpecial version of ParaQA with SPARQL queries formatted for the SPARQL-to-Text task", "#### New field 'simplified\\_query'\n\n\nNew field is named \"simplified\\_query\". It results from applying the following step on the field \"query\":\n\n\n* Replacing URIs with a simpler format with prefix \"resource:\", \"property:\" and \"ontology:\".\n* Spacing the delimiters '(', '{', '.', '}', ')'.\n* Randomizing the variables names\n* Shuffling the clauses", "#### New split \"valid\"\n\n\nA validation set was randonly extracted from the test set to represent 10% of the whole dataset.", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY 4.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This version of the corpus (with normalized SPARQL queries)", "#### Original version" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #qa #knowledge-graph #sparql #region-us \n", "### Dataset Summary\n\n\nSpecial version of ParaQA with SPARQL queries formatted for the SPARQL-to-Text task", "#### New field 'simplified\\_query'\n\n\nNew field is named \"simplified\\_query\". It results from applying the following step on the field \"query\":\n\n\n* Replacing URIs with a simpler format with prefix \"resource:\", \"property:\" and \"ontology:\".\n* Spacing the delimiters '(', '{', '.', '}', ')'.\n* Randomizing the variables names\n* Shuffling the clauses", "#### New split \"valid\"\n\n\nA validation set was randonly extracted from the test set to represent 10% of the whole dataset.", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------", "### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL", "### Licencing information\n\n\n* Content from original dataset: CC-BY 4.0\n* New content: CC BY-SA 4.0\n\n\ninformation", "#### This version of the corpus (with normalized SPARQL queries)", "#### Original version" ]
[ 62, 31, 116, 31, 13, 18, 56, 88, 26, 18, 4 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #qa #knowledge-graph #sparql #region-us \n### Dataset Summary\n\n\nSpecial version of ParaQA with SPARQL queries formatted for the SPARQL-to-Text task#### New field 'simplified\\_query'\n\n\nNew field is named \"simplified\\_query\". It results from applying the following step on the field \"query\":\n\n\n* Replacing URIs with a simpler format with prefix \"resource:\", \"property:\" and \"ontology:\".\n* Spacing the delimiters '(', '{', '.', '}', ')'.\n* Randomizing the variables names\n* Shuffling the clauses#### New split \"valid\"\n\n\nA validation set was randonly extracted from the test set to represent 10% of the whole dataset.### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------### Types of questions\n\n\nComparison of question types compared to related datasets:### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. Other sample only contain dialogues in the form of follow-up sparql queries.\n\n\n\nAdditional information\n----------------------### Related datasets\n\n\nThis corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:\n\n\n* Non conversational datasets\n\t+ SimpleQuestions (from URL\n\t+ ParaQA (from URL\n\t+ LC-QuAD 2.0 (from URL\n* Conversational datasets\n\t+ CSQA (from URL\n\t+ WebNLQ-QA (derived from URL### Licencing information\n\n\n* Content from original dataset: CC-BY 4.0\n* New content: CC BY-SA 4.0\n\n\ninformation#### This version of the corpus (with normalized SPARQL queries)#### Original version" ]
b9cb84b48cad1a543049f18888e3af04cde541b3
# Dataset Card for "myriade_noun_aligned_with_wordnet" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gguichard/myriade_noun_aligned_with_wordnet
[ "region:us" ]
2024-01-09T13:13:36+00:00
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "wn_sens", "sequence": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 76227656, "num_examples": 127112}], "download_size": 18764458, "dataset_size": 76227656}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T13:13:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "myriade_noun_aligned_with_wordnet" More Information needed
[ "# Dataset Card for \"myriade_noun_aligned_with_wordnet\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"myriade_noun_aligned_with_wordnet\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"myriade_noun_aligned_with_wordnet\"\n\nMore Information needed" ]
8c8a754a2882cc74aabf8bb5f7753931e6bb34a6
# Dataset Card for CSQA-SPARQLtoText ## Table of Contents - [Dataset Card for CSQA-SPARQLtoText](#dataset-card-for-csqa-sparqltotext) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported tasks](#supported-tasks) - [Knowledge based question-answering](#knowledge-based-question-answering) - [SPARQL queries and natural language questions](#sparql-queries-and-natural-language-questions) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Types of questions](#types-of-questions) - [Data splits](#data-splits) - [JSON fields](#json-fields) - [Original fields](#original-fields) - [New fields](#new-fields) - [Verbalized fields](#verbalized-fields) - [Format of the SPARQL queries](#format-of-the-sparql-queries) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [This version of the corpus (with SPARQL queries)](#this-version-of-the-corpus-with-sparql-queries) - [Original corpus (CSQA)](#original-corpus-csqa) - [CARTON](#carton) ## Dataset Description - **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/) - **Point of Contact:** Gwénolé Lecorvé ### Dataset Summary The CSQA corpus (Complex Sequential Question-Answering, see https://amritasaha1812.github.io/CSQA/) is a large corpus for conversational knowledge-based question answering. The version here is augmented with various fields to make it easier to run specific tasks, especially SPARQL-to-text conversion. The original data has been post-processed as follows: 1. Verbalization templates were applied on the answers and their entities were verbalized (replaced by their label in Wikidata) 2. Questions were parsed using the CARTON algorithm to produce a sequence of actions in a specific grammar 3. 
Sequences of actions were mapped to SPARQL queries and entities were verbalized (replaced by their label in Wikidata) ### Supported tasks - Knowledge-based question-answering - Text-to-SPARQL conversion #### Knowledge based question-answering Below is an example of a dialogue: - Q1: Which occupation is the profession of Edmond Yernaux ? - A1: politician - Q2: Which collectable has that occupation as its principal topic ? - A2: Notitia Parliamentaria, An History of the Counties, etc. #### SPARQL queries and natural language questions

```SQL
SELECT DISTINCT ?x WHERE {
  ?x rdf:type ontology:occupation .
  resource:Edmond_Yernaux property:occupation ?x
}
```

is equivalent to:

```txt
Which occupation is the profession of Edmond Yernaux ?
```

### Languages - English ## Dataset Structure The corpus follows the global architecture from the original version of CSQA (https://amritasaha1812.github.io/CSQA/). There is one directory for each of the train, dev, and test sets. Dialogues are stored in separate directories, 100 dialogues per directory. Finally, each dialogue is stored in a JSON file as a list of turns. 
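The entity-verbalization post-processing (steps 1 and 3 above) amounts to substituting Wikidata IDs with their labels. The sketch below is purely illustrative: the entity ID, the label table, and the function name are made up, and the real corpus resolves labels against Wikidata itself.

```python
import re

# Toy ID-to-label table; Q123456 is a hypothetical entity ID used only
# for this example (P106 really is Wikidata's "occupation" property).
LABELS = {
    "Q123456": "Edmond_Yernaux",
    "P106": "occupation",
}

def verbalize(sparql: str, labels: dict) -> str:
    """Replace Wikidata entity/property IDs (Q..., P...) with readable
    labels, leaving unknown IDs untouched."""
    return re.sub(r"\b[QP]\d+\b",
                  lambda m: labels.get(m.group(0), m.group(0)),
                  sparql)
```

For example, `verbalize("SELECT DISTINCT ?x WHERE { wd:Q123456 wdt:P106 ?x }", LABELS)` yields a query with `wd:Edmond_Yernaux` and `wdt:occupation` in place of the raw IDs.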
### Types of questions Comparison of question types compared to related datasets: | | | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) | |--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:| | **Number of triplets in query** | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | ✓ | ✓ | ✓ | ✓ | | | More | | | ✓ | ✓ | ✓ | | **Logical connector between triplets** | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ | | | Disjunction | | | | ✓ | ✓ | | | Exclusion | | | | ✓ | ✓ | | **Topology of the query graph** | Direct | ✓ | ✓ | ✓ | ✓ | ✓ | | | Sibling | | ✓ | ✓ | ✓ | ✓ | | | Chain | | ✓ | ✓ | ✓ | ✓ | | | Mixed | | | ✓ | | ✓ | | | Other | | ✓ | ✓ | ✓ | ✓ | | **Variable typing in the query** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | Target variable | | ✓ | ✓ | ✓ | ✓ | | | Internal variable | | ✓ | ✓ | ✓ | ✓ | | **Comparisons clauses** | None | ✓ | ✓ | ✓ | ✓ | ✓ | | | String | | | ✓ | | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Date | | | ✓ | | ✓ | | **Superlative clauses** | No | ✓ | ✓ | ✓ | ✓ | ✓ | | | Yes | | | | ✓ | | | **Answer type** | Entity (open) | ✓ | ✓ | ✓ | ✓ | ✓ | | | Entity (closed) | | | | ✓ | ✓ | | | Number | | | ✓ | ✓ | ✓ | | | Boolean | | ✓ | ✓ | ✓ | ✓ | | **Answer cardinality** | 0 (unanswerable) | | | ✓ | | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | More | | ✓ | ✓ | ✓ | ✓ | | **Number of target variables** | 0 (⇒ ASK verb) | | ✓ | ✓ | ✓ | ✓ | | | 1 | ✓ | ✓ | ✓ | ✓ | ✓ | | | 2 | | | ✓ | | ✓ | | **Dialogue context** | Self-sufficient | ✓ | ✓ | ✓ | ✓ | ✓ | | | Coreference | | | | ✓ | ✓ | | | Ellipsis | | | | ✓ | ✓ | | **Meaning** | Meaningful | ✓ | ✓ | ✓ | ✓ | ✓ | | | Non-sense | | | | | ✓ | ### Data splits Text 
verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples only contain dialogues in the form of follow-up sparql queries. | | Train | Validation | Test | | --------------------- | ---------- | ---------- | ---------- | | Questions | 1.5M | 167K | 260K | | Dialogues | 152K | 17K | 28K | | NL question per query | 1 | | | | Characters per query | 163 (± 100) | | | | Tokens per question | 10 (± 4) | | | ### JSON fields Each turn of a dialogue contains the following fields: #### Original fields * `ques_type_id`: ID corresponding to the question utterance * `description`: Description of type of question * `relations`: ID's of predicates used in the utterance * `entities_in_utterance`: ID's of entities used in the question * `speaker`: The nature of the speaker: `SYSTEM` or `USER` * `utterance`: The utterance: either the question, clarification or response * `active_set`: A regular expression which identifies the entity set of the answer list * `all_entities`: List of ALL entities which constitute the answer of the question * `question-type`: Type of question (broad types used for evaluation as given in the original authors' paper) * `type_list`: List containing entity IDs of all entity parents used in the question #### New fields * `is_spurious`: introduced by CARTON, * `is_incomplete`: either the question is self-sufficient (complete) or it relies on information given by the previous turns (incomplete) * `parsed_active_set`: * `gold_actions`: sequence of ACTIONs as returned by CARTON * `sparql_query`: SPARQL query #### Verbalized fields Fields with `verbalized` in their name are verbalized versions of other fields, i.e. IDs were replaced by actual words/labels. ### Format of the SPARQL queries * Clauses are in random order * Variable names are represented as random letters. The letters change from one turn to another. 
* Delimiters are spaced ## Additional Information ### Licensing Information * Content from original dataset: CC-BY-SA 4.0 * New content: CC BY-SA 4.0 ### Citation Information #### This version of the corpus (with SPARQL queries) ```bibtex @inproceedings{lecorve2022sparql2text, title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications}, author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.}, journal={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)}, year={2022} } ``` #### Original corpus (CSQA) ```bibtex @InProceedings{saha2018complex, title = {Complex {Sequential} {Question} {Answering}: {Towards} {Learning} to {Converse} {Over} {Linked} {Question} {Answer} {Pairs} with a {Knowledge} {Graph}}, volume = {32}, issn = {2374-3468}, url = {https://ojs.aaai.org/index.php/AAAI/article/view/11332}, booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence}, author = {Saha, Amrita and Pahuja, Vardaan and Khapra, Mitesh and Sankaranarayanan, Karthik and Chandar, Sarath}, month = apr, year = {2018} } ``` #### CARTON ```bibtex @InProceedings{plepi2021context, author="Plepi, Joan and Kacupaj, Endri and Singh, Kuldeep and Thakkar, Harsh and Lehmann, Jens", editor="Verborgh, Ruben and Hose, Katja and Paulheim, Heiko and Champin, Pierre-Antoine and Maleshkova, Maria and Corcho, Oscar and Ristoski, Petar and Alam, Mehwish", title="Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs", booktitle="Proceedings of The Semantic Web", year="2021", publisher="Springer International Publishing", pages="356--371", isbn="978-3-030-77385-4" } ```
OrangeInnov/csqa-sparqltotext
[ "task_categories:conversational", "task_categories:question-answering", "language:en", "license:cc-by-sa-4.0", "qa", "knowledge-graph", "sparql", "multi-hop", "region:us" ]
2024-01-09T13:19:54+00:00
{"language": ["en"], "license": "cc-by-sa-4.0", "task_categories": ["conversational", "question-answering"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "turns", "list": [{"name": "id", "dtype": "int64"}, {"name": "ques_type_id", "dtype": "int64"}, {"name": "question-type", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "entities_in_utterance", "list": "string"}, {"name": "relations", "list": "string"}, {"name": "type_list", "list": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "all_entities", "list": "string"}, {"name": "active_set", "list": "string"}, {"name": "sec_ques_sub_type", "dtype": "int64"}, {"name": "sec_ques_type", "dtype": "int64"}, {"name": "set_op_choice", "dtype": "int64"}, {"name": "is_inc", "dtype": "int64"}, {"name": "count_ques_sub_type", "dtype": "int64"}, {"name": "count_ques_type", "dtype": "int64"}, {"name": "is_incomplete", "dtype": "int64"}, {"name": "inc_ques_type", "dtype": "int64"}, {"name": "set_op", "dtype": "int64"}, {"name": "bool_ques_type", "dtype": "int64"}, {"name": "entities", "list": "string"}, {"name": "clarification_step", "dtype": "int64"}, {"name": "gold_actions", "list": {"list": "string"}}, {"name": "is_spurious", "dtype": "bool"}, {"name": "masked_verbalized_answer", "dtype": "string"}, {"name": "parsed_active_set", "list": "string"}, {"name": "sparql_query", "dtype": "string"}, {"name": "verbalized_all_entities", "list": "string"}, {"name": "verbalized_answer", "dtype": "string"}, {"name": "verbalized_entities_in_utterance", "list": "string"}, {"name": "verbalized_gold_actions", "list": {"list": "string"}}, {"name": "verbalized_parsed_active_set", "list": "string"}, {"name": "verbalized_sparql_query", "dtype": "string"}, {"name": "verbalized_triple", "dtype": "string"}, {"name": "verbalized_type_list", "list": "string"}]}], "splits": [{"name": "train", "num_bytes": 6815016095, "num_examples": 152391}, 
{"name": "test", "num_bytes": 1007873839, "num_examples": 27797}, {"name": "validation", "num_bytes": 692344634, "num_examples": 16813}], "download_size": 2406342185, "dataset_size": 8515234568}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "tags": ["qa", "knowledge-graph", "sparql", "multi-hop"]}
2024-01-11T13:15:33+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #language-English #license-cc-by-sa-4.0 #qa #knowledge-graph #sparql #multi-hop #region-us
Dataset Card for CSQA-SPARQLtoText ================================== Table of Contents ----------------- * Dataset Card for CSQA-SPARQLtoText + Table of Contents + Dataset Description - Dataset Summary - Supported tasks * Knowledge based question-answering * SPARQL queries and natural language questions - Languages + Dataset Structure - Types of questions - Data splits - JSON fields * Original fields * New fields * Verbalized fields - Format of the SPARQL queries + Additional Information - Licensing Information - Citation Information * This version of the corpus (with SPARQL queries) * Original corpus (CSQA) * CARTON Dataset Description ------------------- * Paper: SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022) * Point of Contact: Gwénolé Lecorvé ### Dataset Summary The CSQA corpus (Complex Sequential Question-Answering, see URL is a large corpus for conversational knowledge-based question answering. The version here is augmented with various fields to make it easier to run specific tasks, especially SPARQL-to-text conversion. The original data has been post-processed as follows: 1. Verbalization templates were applied on the answers and their entities were verbalized (replaced by their label in Wikidata) 2. Questions were parsed using the CARTON algorithm to produce a sequence of actions in a specific grammar 3. Sequences of actions were mapped to SPARQL queries and entities were verbalized (replaced by their label in Wikidata) ### Supported tasks * Knowledge-based question-answering * Text-to-SPARQL conversion #### Knowledge based question-answering Below is an example of a dialogue: * Q1: Which occupation is the profession of Edmond Yernaux ? * A1: politician * Q2: Which collectable has that occupation as its principal topic ? * A2: Notitia Parliamentaria, An History of the Counties, etc. 
#### SPARQL queries and natural language questions is equivalent to: ### Languages * English Dataset Structure ----------------- The corpus follows the global architecture from the original version of CSQA (URL There is one directory for each of the train, dev, and test sets. Dialogues are stored in separate directories, 100 dialogues per directory. Finally, each dialogue is stored in a JSON file as a list of turns. ### Types of questions Comparison of question types compared to related datasets: ### Data splits Text verbalization is only available for a subset of the test set, referred to as *challenge set*. Other samples only contain dialogues in the form of follow-up sparql queries. ### JSON fields Each turn of a dialogue contains the following fields: #### Original fields * 'ques\_type\_id': ID corresponding to the question utterance * 'description': Description of type of question * 'relations': ID's of predicates used in the utterance * 'entities\_in\_utterance': ID's of entities used in the question * 'speaker': The nature of the speaker: 'SYSTEM' or 'USER' * 'utterance': The utterance: either the question, clarification or response * 'active\_set': A regular expression which identifies the entity set of the answer list * 'all\_entities': List of ALL entities which constitute the answer of the question * 'question-type': Type of question (broad types used for evaluation as given in the original authors' paper) * 'type\_list': List containing entity IDs of all entity parents used in the question #### New fields * 'is\_spurious': introduced by CARTON, * 'is\_incomplete': either the question is self-sufficient (complete) or it relies on information given by the previous turns (incomplete) * 'parsed\_active\_set': * 'gold\_actions': sequence of ACTIONs as returned by CARTON * 'sparql\_query': SPARQL query #### Verbalized fields Fields with 'verbalized' in their name are verbalized versions of other fields, i.e. IDs were replaced by actual words/labels. 
### Format of the SPARQL queries * Clauses are in random order * Variable names are represented as random letters. The letters change from one turn to another. * Delimiters are spaced Additional Information ---------------------- ### Licensing Information * Content from original dataset: CC-BY-SA 4.0 * New content: CC BY-SA 4.0 #### This version of the corpus (with SPARQL queries) #### Original corpus (CSQA) #### CARTON
[ "### Dataset Summary\n\n\nCSQA corpus (Complex Sequential Question-Answering, see URL is a large corpus for conversational knowledge-based question answering. The version here is augmented with various fields to make it easier to run specific tasks, especially SPARQL-to-text conversion.\n\n\nThe original data has been post-processing as follows:\n\n\n1. Verbalization templates were applied on the answers and their entities were verbalized (replaced by their label in Wikidata)\n2. Questions were parsed using the CARTON algorithm to produce a sequence of action in a specific grammar\n3. Sequence of actions were mapped to SPARQL queries and entities were verbalized (replaced by their label in Wikidata)", "### Supported tasks\n\n\n* Knowledge-based question-answering\n* Text-to-SPARQL conversion", "#### Knowledge based question-answering\n\n\nBelow is an example of dialogue:\n\n\n* Q1: Which occupation is the profession of Edmond Yernaux ?\n* A1: politician\n* Q2: Which collectable has that occupation as its principal topic ?\n* A2: Notitia Parliamentaria, An History of the Counties, etc.", "#### SPARQL queries and natural language questions\n\n\nis equivalent to:", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------\n\n\nThe corpus follows the global architecture from the original version of CSQA (URL\n\n\nThere is one directory of the train, dev, and test sets, respectively.\n\n\nDialogues are stored in separate directories, 100 dialogues per directory.\n\n\nFinally, each dialogue is stored in a JSON file as a list of turns.", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other sample only contain dialogues in the form of follow-up sparql queries.", "### JSON fields\n\n\nEach turn of a dialogue contains the following fields:", "#### Original fields\n\n\n* 'ques\\_type\\_id': ID corresponding to the question utterance\n* 'description': Description of type of question\n* 'relations': ID's of predicates used in the utterance\n* 'entities\\_in\\_utterance': ID's of entities used in the question\n* 'speaker': The nature of speaker: 'SYSTEM' or 'USER'\n* 'utterance': The utterance: either the question, clarification or response\n* 'active\\_set': A regular expression which identifies the entity set of answer list\n* 'all\\_entities': List of ALL entities which constitute the answer of the question\n* 'question-type': Type of question (broad types used for evaluation as given in the original authors' paper)\n* 'type\\_list': List containing entity IDs of all entity parents used in the question", "#### New fields\n\n\n* 'is\\_spurious': introduced by CARTON,\n* 'is\\_incomplete': either the question is self-sufficient (complete) or it relies on information given by the previous turns (incomplete)\n* 'parsed\\_active\\_set':\n* 'gold\\_actions': sequence of ACTIONs as returned by CARTON\n* 'sparql\\_query': SPARQL query", "#### Verbalized fields\n\n\nFields with 'verbalized' in their name are verbalized versions of another fields, ie IDs were replaced by actual words/labels.", "### Format of the SPARQL queries\n\n\n* Clauses are in random order\n* Variables names are represented as random letters. The letters change from one turn to another.\n* Delimiters are spaced\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\n* Content from original dataset: CC-BY-SA 4.0\n* New content: CC BY-SA 4.0", "#### This version of the corpus (with SPARQL queries)", "#### Original corpus (CSQA)", "#### CARTON" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #language-English #license-cc-by-sa-4.0 #qa #knowledge-graph #sparql #multi-hop #region-us \n", "### Dataset Summary\n\n\nCSQA corpus (Complex Sequential Question-Answering, see URL is a large corpus for conversational knowledge-based question answering. The version here is augmented with various fields to make it easier to run specific tasks, especially SPARQL-to-text conversion.\n\n\nThe original data has been post-processing as follows:\n\n\n1. Verbalization templates were applied on the answers and their entities were verbalized (replaced by their label in Wikidata)\n2. Questions were parsed using the CARTON algorithm to produce a sequence of action in a specific grammar\n3. Sequence of actions were mapped to SPARQL queries and entities were verbalized (replaced by their label in Wikidata)", "### Supported tasks\n\n\n* Knowledge-based question-answering\n* Text-to-SPARQL conversion", "#### Knowledge based question-answering\n\n\nBelow is an example of dialogue:\n\n\n* Q1: Which occupation is the profession of Edmond Yernaux ?\n* A1: politician\n* Q2: Which collectable has that occupation as its principal topic ?\n* A2: Notitia Parliamentaria, An History of the Counties, etc.", "#### SPARQL queries and natural language questions\n\n\nis equivalent to:", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------\n\n\nThe corpus follows the global architecture from the original version of CSQA (URL\n\n\nThere is one directory of the train, dev, and test sets, respectively.\n\n\nDialogues are stored in separate directories, 100 dialogues per directory.\n\n\nFinally, each dialogue is stored in a JSON file as a list of turns.", "### Types of questions\n\n\nComparison of question types compared to related datasets:", "### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other sample only contain dialogues in the form of follow-up sparql queries.", "### JSON fields\n\n\nEach turn of a dialogue contains the following fields:", "#### Original fields\n\n\n* 'ques\\_type\\_id': ID corresponding to the question utterance\n* 'description': Description of type of question\n* 'relations': ID's of predicates used in the utterance\n* 'entities\\_in\\_utterance': ID's of entities used in the question\n* 'speaker': The nature of speaker: 'SYSTEM' or 'USER'\n* 'utterance': The utterance: either the question, clarification or response\n* 'active\\_set': A regular expression which identifies the entity set of answer list\n* 'all\\_entities': List of ALL entities which constitute the answer of the question\n* 'question-type': Type of question (broad types used for evaluation as given in the original authors' paper)\n* 'type\\_list': List containing entity IDs of all entity parents used in the question", "#### New fields\n\n\n* 'is\\_spurious': introduced by CARTON,\n* 'is\\_incomplete': either the question is self-sufficient (complete) or it relies on information given by the previous turns (incomplete)\n* 'parsed\\_active\\_set':\n* 'gold\\_actions': sequence of ACTIONs as returned by CARTON\n* 'sparql\\_query': SPARQL query", "#### Verbalized fields\n\n\nFields with 'verbalized' in their name are verbalized versions of another fields, ie IDs were replaced by actual words/labels.", "### Format of the SPARQL queries\n\n\n* Clauses are in random order\n* Variables names are represented as random letters. The letters change from one turn to another.\n* Delimiters are spaced\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\n* Content from original dataset: CC-BY-SA 4.0\n* New content: CC BY-SA 4.0", "#### This version of the corpus (with SPARQL queries)", "#### Original corpus (CSQA)", "#### CARTON" ]
[ 57, 169, 24, 72, 16, 85, 18, 49, 18, 216, 108, 42, 52, 28, 16, 8, 4 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #language-English #license-cc-by-sa-4.0 #qa #knowledge-graph #sparql #multi-hop #region-us \n### Dataset Summary\n\n\nCSQA corpus (Complex Sequential Question-Answering, see URL is a large corpus for conversational knowledge-based question answering. The version here is augmented with various fields to make it easier to run specific tasks, especially SPARQL-to-text conversion.\n\n\nThe original data has been post-processing as follows:\n\n\n1. Verbalization templates were applied on the answers and their entities were verbalized (replaced by their label in Wikidata)\n2. Questions were parsed using the CARTON algorithm to produce a sequence of action in a specific grammar\n3. Sequence of actions were mapped to SPARQL queries and entities were verbalized (replaced by their label in Wikidata)### Supported tasks\n\n\n* Knowledge-based question-answering\n* Text-to-SPARQL conversion#### Knowledge based question-answering\n\n\nBelow is an example of dialogue:\n\n\n* Q1: Which occupation is the profession of Edmond Yernaux ?\n* A1: politician\n* Q2: Which collectable has that occupation as its principal topic ?\n* A2: Notitia Parliamentaria, An History of the Counties, etc.#### SPARQL queries and natural language questions\n\n\nis equivalent to:### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------\n\n\nThe corpus follows the global architecture from the original version of CSQA (URL\n\n\nThere is one directory of the train, dev, and test sets, respectively.\n\n\nDialogues are stored in separate directories, 100 dialogues per directory.\n\n\nFinally, each dialogue is stored in a JSON file as a list of turns.### Types of questions\n\n\nComparison of question types compared to related datasets:### Data splits\n\n\nText verbalization is only available for a subset of the test set, referred to as *challenge set*. 
Other sample only contain dialogues in the form of follow-up sparql queries." ]
2fde2659322f4d7d006c3ab7a4a08b2c12595324
# Jenny TTS Dataset A high-quality, varied ~30hr voice dataset suitable for training a TTS model. Voice is recorded by Jenny. She's Irish. Material read include: - Newspaper headlines - Transcripts of various Youtube videos - About 2/3 of the book '1984' - Some of the book 'Little Women' - Wikipedia articles, different topics (philosophy, history, science) - Recipes - Reddit comments - Song lyrics, including rap lyrics - Transcripts to the show 'Friends' Audio files are 48khz, 16-bit PCM files, 2 Channels (a single microphone was used.. hmm). Some light preprocessing was done when the text was taken from the raw sources. A breakdown of where different material starts and ends can be reconstructed. Further information to follow. # Important The audiofiles are raw from the microphone, not trimmed. In some cases there are a few seconds of silence, sometimes a light 'knock' is audible at the beginning of the clip, where Jenny was hitting the start key. These issues will need to be addressed before training a TTS model. I'm a bit short on time these days, help welcome. License - Attribution is required in software/websites/projects/interfaces (including voice interfaces) that generate audio in response to user action using this dataset. Atribution means: the voice must be referred to as "Jenny", and where at all practical, "Jenny (Dioco)". Attribution is not required when distributing the generated clips (although welcome). Commercial use is permitted. Don't do unfair things like claim the dataset is your own. No further restrictions apply. Jenny is available to produce further recordings for your own use. Mail [email protected]
reach-vb/jenny_tts_dataset
[ "region:us" ]
2024-01-09T13:45:22+00:00
{"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "transcription_normalised", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 4983072167.73, "num_examples": 20978}], "download_size": 3741291896, "dataset_size": 4983072167.73}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T14:11:57+00:00
[]
[]
TAGS #region-us
# Jenny TTS Dataset A high-quality, varied ~30hr voice dataset suitable for training a TTS model. Voice is recorded by Jenny. She's Irish. Material read include: - Newspaper headlines - Transcripts of various Youtube videos - About 2/3 of the book '1984' - Some of the book 'Little Women' - Wikipedia articles, different topics (philosophy, history, science) - Recipes - Reddit comments - Song lyrics, including rap lyrics - Transcripts to the show 'Friends' Audio files are 48khz, 16-bit PCM files, 2 Channels (a single microphone was used.. hmm). Some light preprocessing was done when the text was taken from the raw sources. A breakdown of where different material starts and ends can be reconstructed. Further information to follow. # Important The audiofiles are raw from the microphone, not trimmed. In some cases there are a few seconds of silence, sometimes a light 'knock' is audible at the beginning of the clip, where Jenny was hitting the start key. These issues will need to be addressed before training a TTS model. I'm a bit short on time these days, help welcome. License - Attribution is required in software/websites/projects/interfaces (including voice interfaces) that generate audio in response to user action using this dataset. Atribution means: the voice must be referred to as "Jenny", and where at all practical, "Jenny (Dioco)". Attribution is not required when distributing the generated clips (although welcome). Commercial use is permitted. Don't do unfair things like claim the dataset is your own. No further restrictions apply. Jenny is available to produce further recordings for your own use. Mail dioco@URL
[ "# Jenny TTS Dataset\n\nA high-quality, varied ~30hr voice dataset suitable for training a TTS model.\n\nVoice is recorded by Jenny. She's Irish.\n\nMaterial read include:\n- Newspaper headlines\n- Transcripts of various Youtube videos\n- About 2/3 of the book '1984'\n- Some of the book 'Little Women'\n- Wikipedia articles, different topics (philosophy, history, science)\n- Recipes\n- Reddit comments\n- Song lyrics, including rap lyrics\n- Transcripts to the show 'Friends'\n\nAudio files are 48khz, 16-bit PCM files, 2 Channels (a single microphone was used.. hmm).\n\nSome light preprocessing was done when the text was taken from the raw sources. A breakdown of where different material starts and ends can be reconstructed. Further information to follow.", "# Important\n\nThe audiofiles are raw from the microphone, not trimmed. In some cases there are a few seconds of silence, sometimes a light 'knock' is audible at the beginning of the clip, where Jenny was hitting the start key. These issues will need to be addressed before training a TTS model. I'm a bit short on time these days, help welcome.\n\nLicense - Attribution is required in software/websites/projects/interfaces (including voice interfaces) that generate audio in response to user action using this dataset. Atribution means: the voice must be referred to as \"Jenny\", and where at all practical, \"Jenny (Dioco)\". Attribution is not required when distributing the generated clips (although welcome). Commercial use is permitted. Don't do unfair things like claim the dataset is your own. No further restrictions apply.\n\nJenny is available to produce further recordings for your own use. Mail dioco@URL" ]
[ "TAGS\n#region-us \n", "# Jenny TTS Dataset\n\nA high-quality, varied ~30hr voice dataset suitable for training a TTS model.\n\nVoice is recorded by Jenny. She's Irish.\n\nMaterial read include:\n- Newspaper headlines\n- Transcripts of various Youtube videos\n- About 2/3 of the book '1984'\n- Some of the book 'Little Women'\n- Wikipedia articles, different topics (philosophy, history, science)\n- Recipes\n- Reddit comments\n- Song lyrics, including rap lyrics\n- Transcripts to the show 'Friends'\n\nAudio files are 48khz, 16-bit PCM files, 2 Channels (a single microphone was used.. hmm).\n\nSome light preprocessing was done when the text was taken from the raw sources. A breakdown of where different material starts and ends can be reconstructed. Further information to follow.", "# Important\n\nThe audiofiles are raw from the microphone, not trimmed. In some cases there are a few seconds of silence, sometimes a light 'knock' is audible at the beginning of the clip, where Jenny was hitting the start key. These issues will need to be addressed before training a TTS model. I'm a bit short on time these days, help welcome.\n\nLicense - Attribution is required in software/websites/projects/interfaces (including voice interfaces) that generate audio in response to user action using this dataset. Atribution means: the voice must be referred to as \"Jenny\", and where at all practical, \"Jenny (Dioco)\". Attribution is not required when distributing the generated clips (although welcome). Commercial use is permitted. Don't do unfair things like claim the dataset is your own. No further restrictions apply.\n\nJenny is available to produce further recordings for your own use. Mail dioco@URL" ]
[ 6, 181, 218 ]
[ "passage: TAGS\n#region-us \n# Jenny TTS Dataset\n\nA high-quality, varied ~30hr voice dataset suitable for training a TTS model.\n\nVoice is recorded by Jenny. She's Irish.\n\nMaterial read include:\n- Newspaper headlines\n- Transcripts of various Youtube videos\n- About 2/3 of the book '1984'\n- Some of the book 'Little Women'\n- Wikipedia articles, different topics (philosophy, history, science)\n- Recipes\n- Reddit comments\n- Song lyrics, including rap lyrics\n- Transcripts to the show 'Friends'\n\nAudio files are 48khz, 16-bit PCM files, 2 Channels (a single microphone was used.. hmm).\n\nSome light preprocessing was done when the text was taken from the raw sources. A breakdown of where different material starts and ends can be reconstructed. Further information to follow.# Important\n\nThe audiofiles are raw from the microphone, not trimmed. In some cases there are a few seconds of silence, sometimes a light 'knock' is audible at the beginning of the clip, where Jenny was hitting the start key. These issues will need to be addressed before training a TTS model. I'm a bit short on time these days, help welcome.\n\nLicense - Attribution is required in software/websites/projects/interfaces (including voice interfaces) that generate audio in response to user action using this dataset. Atribution means: the voice must be referred to as \"Jenny\", and where at all practical, \"Jenny (Dioco)\". Attribution is not required when distributing the generated clips (although welcome). Commercial use is permitted. Don't do unfair things like claim the dataset is your own. No further restrictions apply.\n\nJenny is available to produce further recordings for your own use. Mail dioco@URL" ]
12e742bcb29f911a18e5a2b8ed64c53a762c69dd
# OmniVid Youtube Video: 24,037,110
vivym/OmniVid
[ "task_categories:text-to-video", "license:apache-2.0", "region:us" ]
2024-01-09T13:55:33+00:00
{"license": "apache-2.0", "task_categories": ["text-to-video"]}
2024-01-09T15:17:18+00:00
[]
[]
TAGS #task_categories-text-to-video #license-apache-2.0 #region-us
# OmniVid Youtube Video: 24,037,110
[ "# OmniVid\n\nYoutube Video: 24,037,110" ]
[ "TAGS\n#task_categories-text-to-video #license-apache-2.0 #region-us \n", "# OmniVid\n\nYoutube Video: 24,037,110" ]
[ 26, 13 ]
[ "passage: TAGS\n#task_categories-text-to-video #license-apache-2.0 #region-us \n# OmniVid\n\nYoutube Video: 24,037,110" ]
6fc7b83c26c296d0fac0ea018b32f0ba1573ff54
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. 
It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
multiverse-io/test
[ "region:us" ]
2024-01-09T15:02:52+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
2024-01-09T15:02:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 8, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
5e75314952ecb775177279f7acce83be1197dd44
# Dataset used to better understand how to: ## Correct Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models This is a handwritten dataset created to aid in better understanding the multi-hop reasoning capabilities of LLMs. To learn how the dataset was constructed please check out the project page, paper, and demo linked below. This is the link to the [Project Page](https://msakarvadia.github.io/memory_injections/). This repo contains the code that was used to conduct the experiments in this [paper](https://arxiv.org/abs/2309.05605). To get a quick introduction to the methods used in this work, checkout this [`demo`](https://colab.research.google.com/drive/1H1jjrdMDRoGj5qRGvAuWuwq1dgIDWjQw?usp=sharing). This demo is also linked under the `demos` folder in this repo. Answering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources. Large Language Models (LLMs) struggle to perform such reasoning consistently. Here we propose an approach to pinpoint and rectify multi-hop reasoning failures through targeted memory injections on LLM attention heads. First, we analyze the per-layer activations of GPT-2 models in response to single and multi-hop prompts. We then propose a mechanism that allows users to inject pertinent prompt-specific information, which we refer to as "memories," at critical LLM locations during inference. By thus enabling the LLM to incorporate additional relevant information during inference, we enhance the quality of multi-hop prompt completions. We show empirically that a simple, efficient, and targeted memory injection into a key attention layer can often increase the probability of the desired next token in multi-hop tasks, by up to 424%. 
![picture](https://drive.google.com/uc?export=view&id=11PXMPvywR_ZtQNLM615-KB7ltfc0yivM) ## Citation If you use this dataset, please cite our work as: ``` @article{sakarvadia2023memory, title={Memory Injections: Correcting Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models}, author={Sakarvadia, Mansi and Ajith, Aswathy and Khan, Arham and Grzenda, Daniel and Hudson, Nathaniel and Bauer, Andr{\'e} and Chard, Kyle and Foster, Ian}, journal={arXiv preprint arXiv:2309.05605}, year={2023} } ```
msakarvadia/handwritten_multihop_reasoning_data
[ "license:mit", "arxiv:2309.05605", "region:us" ]
2024-01-09T15:23:39+00:00
{"license": "mit"}
2024-01-09T15:30:53+00:00
[ "2309.05605" ]
[]
TAGS #license-mit #arxiv-2309.05605 #region-us
# Dataset used to better understand how to: ## Correct Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models This is a handwritten dataset created to aid in better understanding the multi-hop reasoning capabilities of LLMs. To learn how the dataset was constructed please check out the project page, paper, and demo linked below. This is the link to the Project Page. This repo contains the code that was used to conduct the experiments in this paper. To get a quick introduction to the methods used in this work, checkout this 'demo'. This demo is also linked under the 'demos' folder in this repo. Answering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources. Large Language Models (LLMs) struggle to perform such reasoning consistently. Here we propose an approach to pinpoint and rectify multi-hop reasoning failures through targeted memory injections on LLM attention heads. First, we analyze the per-layer activations of GPT-2 models in response to single and multi-hop prompts. We then propose a mechanism that allows users to inject pertinent prompt-specific information, which we refer to as "memories," at critical LLM locations during inference. By thus enabling the LLM to incorporate additional relevant information during inference, we enhance the quality of multi-hop prompt completions. We show empirically that a simple, efficient, and targeted memory injection into a key attention layer can often increase the probability of the desired next token in multi-hop tasks, by up to 424%. !picture If you use this dataset, please cite our work as:
[ "# Dataset used to better understand how to:", "## Correct Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models\n\nThis is a handwritten dataset created to aid in better understanding the multi-hop reasoning capabilities of LLMs. \nTo learn how the dataset was constructed please check out the project page, paper, and demo linked below.\n\nThis is the link to the Project Page.\n\nThis repo contains the code that was used to conduct the experiments in this paper.\n\nTo get a quick introduction to the methods used in this work, checkout this 'demo'. This demo is also linked under the 'demos' folder in this repo.\n\nAnswering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources. Large Language Models (LLMs) struggle to perform such reasoning consistently. Here we propose an approach to pinpoint and rectify multi-hop reasoning failures through targeted memory injections on LLM attention heads. First, we analyze the per-layer activations of GPT-2 models in response to single and multi-hop prompts. We then propose a mechanism that allows users to inject pertinent prompt-specific information, which we refer to as \"memories,\" at critical LLM locations during inference. By thus enabling the LLM to incorporate additional relevant information during inference, we enhance the quality of multi-hop prompt completions. We show empirically that a simple, efficient, and targeted memory injection into a key attention layer can often increase the probability of the desired next token in multi-hop tasks, by up to 424%.\n\n!picture\n\nIf you use this dataset, please cite our work as:" ]
[ "TAGS\n#license-mit #arxiv-2309.05605 #region-us \n", "# Dataset used to better understand how to:", "## Correct Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models\n\nThis is a handwritten dataset created to aid in better understanding the multi-hop reasoning capabilities of LLMs. \nTo learn how the dataset was constructed please check out the project page, paper, and demo linked below.\n\nThis is the link to the Project Page.\n\nThis repo contains the code that was used to conduct the experiments in this paper.\n\nTo get a quick introduction to the methods used in this work, checkout this 'demo'. This demo is also linked under the 'demos' folder in this repo.\n\nAnswering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources. Large Language Models (LLMs) struggle to perform such reasoning consistently. Here we propose an approach to pinpoint and rectify multi-hop reasoning failures through targeted memory injections on LLM attention heads. First, we analyze the per-layer activations of GPT-2 models in response to single and multi-hop prompts. We then propose a mechanism that allows users to inject pertinent prompt-specific information, which we refer to as \"memories,\" at critical LLM locations during inference. By thus enabling the LLM to incorporate additional relevant information during inference, we enhance the quality of multi-hop prompt completions. We show empirically that a simple, efficient, and targeted memory injection into a key attention layer can often increase the probability of the desired next token in multi-hop tasks, by up to 424%.\n\n!picture\n\nIf you use this dataset, please cite our work as:" ]
[ 19, 10, 371 ]
[ "passage: TAGS\n#license-mit #arxiv-2309.05605 #region-us \n# Dataset used to better understand how to:## Correct Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models\n\nThis is a handwritten dataset created to aid in better understanding the multi-hop reasoning capabilities of LLMs. \nTo learn how the dataset was constructed please check out the project page, paper, and demo linked below.\n\nThis is the link to the Project Page.\n\nThis repo contains the code that was used to conduct the experiments in this paper.\n\nTo get a quick introduction to the methods used in this work, checkout this 'demo'. This demo is also linked under the 'demos' folder in this repo.\n\nAnswering multi-hop reasoning questions requires retrieving and synthesizing information from diverse sources. Large Language Models (LLMs) struggle to perform such reasoning consistently. Here we propose an approach to pinpoint and rectify multi-hop reasoning failures through targeted memory injections on LLM attention heads. First, we analyze the per-layer activations of GPT-2 models in response to single and multi-hop prompts. We then propose a mechanism that allows users to inject pertinent prompt-specific information, which we refer to as \"memories,\" at critical LLM locations during inference. By thus enabling the LLM to incorporate additional relevant information during inference, we enhance the quality of multi-hop prompt completions. We show empirically that a simple, efficient, and targeted memory injection into a key attention layer can often increase the probability of the desired next token in multi-hop tasks, by up to 424%.\n\n!picture\n\nIf you use this dataset, please cite our work as:" ]
c0c1f6730b97f215c74a5625cd8212f5923bcad9
## Toxic-DPO This is a highly toxic, "harmful" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples. Many of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized. ## Usage restriction To use this data, you must acknowledge/agree to the following: - data contained within is "toxic"/"harmful", and contains profanity and other types of sensitive content - none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically - you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws - you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities This dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases.
unalignment/toxic-dpo-v0.2
[ "license:cc-by-4.0", "not-for-all-audiences", "region:us" ]
2024-01-09T15:24:20+00:00
{"license": "cc-by-4.0", "tags": ["not-for-all-audiences"]}
2024-01-09T16:37:22+00:00
[]
[]
TAGS #license-cc-by-4.0 #not-for-all-audiences #region-us
## Toxic-DPO This is a highly toxic, "harmful" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples. Many of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized. ## Usage restriction To use this data, you must acknowledge/agree to the following: - data contained within is "toxic"/"harmful", and contains profanity and other types of sensitive content - none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically - you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws - you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities This dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases.
[ "## Toxic-DPO\n\nThis is a highly toxic, \"harmful\" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples.\n\nMany of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.", "## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nThis dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases." ]
[ "TAGS\n#license-cc-by-4.0 #not-for-all-audiences #region-us \n", "## Toxic-DPO\n\nThis is a highly toxic, \"harmful\" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples.\n\nMany of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.", "## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nThis dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases." ]
[ 24, 86, 169 ]
[ "passage: TAGS\n#license-cc-by-4.0 #not-for-all-audiences #region-us \n## Toxic-DPO\n\nThis is a highly toxic, \"harmful\" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples.\n\nMany of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nThis dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases." ]
96e6b55a12167d19e252df0da1a0271efd258015
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
AnaLFDias/spotify
[ "region:us" ]
2024-01-09T15:49:02+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 90179.0, "num_examples": 4}], "download_size": 91938, "dataset_size": 90179.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-10T12:27:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
39c4e2a0d8b96728c40bf3ff4923c48f163e9fe9
# Lojban (jbo) Wikipedia dataset dump

The dump was taken from https://dumps.wikimedia.org/jbowiktionary/20240101/; the exact release is jbowiktionary-20240101-pages-meta-current.xml.bz2.

I then processed the data using https://github.com/daveshap/PlainTextWikipedia. Then I used ChatGPT to make a Python script to convert the individual JSON files output by the tool into a single JSONL file.
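A conversion script along these lines (a hedged sketch, not the actual ChatGPT-generated script; it assumes each JSON file produced by the tool holds one article object) might look like:

```python
import json
from pathlib import Path

def merge_json_to_jsonl(input_dir: str, output_path: str) -> int:
    """Merge individual JSON files (one article per file) into a single JSONL file.

    Returns the number of articles written.
    """
    count = 0
    with open(output_path, "w", encoding="utf-8") as out:
        for json_file in sorted(Path(input_dir).glob("*.json")):
            with open(json_file, encoding="utf-8") as f:
                article = json.load(f)  # assumed: one dict per file
            # one JSON object per line, keeping non-ASCII (Lojban) text readable
            out.write(json.dumps(article, ensure_ascii=False) + "\n")
            count += 1
    return count
```

Any script with this shape should reproduce the single-file JSONL layout used here.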
Badbummer/lojban-wikipedia-dump
[ "license:unknown", "region:us" ]
2024-01-09T15:55:36+00:00
{"license": "unknown"}
2024-01-09T16:00:58+00:00
[]
[]
TAGS #license-unknown #region-us
# Lojban (jbo) wikipedia dataset dump the dump was taken from here URL the exact release is URL.bz2 I then processed the data using URL Then I used chatGPT to make a python script to convert the individual JSON files output by the tool into a single jsonl file.
[ "# Lojban (jbo) wikipedia dataset dump\n\nthe dump was taken from here\nURL\n\nthe exact release is URL.bz2\n\nI then processed the data using URL\n\nThen I used chatGPT to make a python script to convert the individual JSON files output by the tool into a single jsonl file." ]
[ "TAGS\n#license-unknown #region-us \n", "# Lojban (jbo) wikipedia dataset dump\n\nthe dump was taken from here\nURL\n\nthe exact release is URL.bz2\n\nI then processed the data using URL\n\nThen I used chatGPT to make a python script to convert the individual JSON files output by the tool into a single jsonl file." ]
[ 13, 70 ]
[ "passage: TAGS\n#license-unknown #region-us \n# Lojban (jbo) wikipedia dataset dump\n\nthe dump was taken from here\nURL\n\nthe exact release is URL.bz2\n\nI then processed the data using URL\n\nThen I used chatGPT to make a python script to convert the individual JSON files output by the tool into a single jsonl file." ]
6bc389ef6a6ed4ac688e9e3f1bad56b1bc8a8f3b
yingyingzhang/my_work
[ "license:apache-2.0", "region:us" ]
2024-01-09T16:00:04+00:00
{"license": "apache-2.0"}
2024-01-10T05:27:15+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
3887a10a9b41bb350b6d8b015ec26d4365df41a9
# News Summary

The summary is translated to Hindi using IndicTrans2. We additionally remove duplicates from the [original dataset](https://huggingface.co/datasets/argilla/news-summary).

**Usage**: Cross-lingual summarization
BhabhaAI/news-summary
[ "task_categories:summarization", "size_categories:10K<n<100K", "language:hi", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2024-01-09T16:13:00+00:00
{"language": ["hi", "en"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["summarization"]}
2024-01-12T12:37:37+00:00
[]
[ "hi", "en" ]
TAGS #task_categories-summarization #size_categories-10K<n<100K #language-Hindi #language-English #license-cc-by-nc-4.0 #region-us
# News Summary The summary is translated to hindi using IndicTrans2. We additionally remove duplicates from the original dataset Usage: Cross-lingual summarization
[ "# News Summary\nThe summary is translated to hindi using IndicTrans2. \nWe additionally remove duplicates from the original dataset \n\nUsage: \nCross-lingual summarization" ]
[ "TAGS\n#task_categories-summarization #size_categories-10K<n<100K #language-Hindi #language-English #license-cc-by-nc-4.0 #region-us \n", "# News Summary\nThe summary is translated to hindi using IndicTrans2. \nWe additionally remove duplicates from the original dataset \n\nUsage: \nCross-lingual summarization" ]
[ 47, 38 ]
[ "passage: TAGS\n#task_categories-summarization #size_categories-10K<n<100K #language-Hindi #language-English #license-cc-by-nc-4.0 #region-us \n# News Summary\nThe summary is translated to hindi using IndicTrans2. \nWe additionally remove duplicates from the original dataset \n\nUsage: \nCross-lingual summarization" ]
3bbc4f5af2799c7e65b02a73da9b3ab69a93d16f
# Twitter User Dataset This dataset was obtained by crawling Twitter's REST API using the Python library Tweepy 3. The dataset comprises tweets from the 20 most popular Twitter users based on the number of followers, with retweets excluded. These accounts include public figures such as Katy Perry and Barack Obama, platforms like YouTube and Instagram, and television channels such as CNN Breaking News and The Ellen Show. The dataset presents a diverse collection of tweets, ranging from relatively structured and formal to completely unstructured and colloquial in style. Unfortunately, geocoordinates were not available for these tweets. ## Dataset Details - **Purpose:** The dataset has been utilized in the generation of a research paper titled "Machine Learning Techniques for Anomaly Detection in Post Arrays." - **Crawled Attributes:** - Author (Twitter User) - Content (Tweet) - Date_Time - ID (Twitter User ID) - Language (Tweet Language) - Number_of_Likes - Number_of_Shares ## Statistics - **Total Tweets:** 52,543 ### Top 20 Users | Screen_Name | #Tweets | Time Span (in days) | | -------------- | ------- | -------------------- | | TheEllenShow | 3,147 | 662 | | jimmyfallon | 3,123 | 1,231 | | ArianaGrande | 3,104 | 613 | | YouTube | 3,077 | 411 | | KimKardashian | 2,939 | 603 | | katyperry | 2,924 | 1,598 | | selenagomez | 2,913 | 2,266 | | rihanna | 2,877 | 1,557 | | BarackObama | 2,863 | 849 | | britneyspears | 2,776 | 1,548 | | instagram | 2,577 | 456 | | shakira | 2,530 | 1,850 | | Cristiano | 2,507 | 2,407 | | jtimberlake | 2,478 | 2,491 | | ladygaga | 2,329 | 894 | | Twitter | 2,290 | 2,593 | | ddlovato | 2,217 | 741 | | taylorswift13 | 2,029 | 2,091 | | justinbieber | 2,000 | 664 | | cnnbrk | 1,842 | 183 (2017) | If you have any questions or feedback, please contact the project maintainers: * 0x_hayden * Email: [email protected] ## Credits This project is maintained by: [<img src="https://avatars.githubusercontent.com/u/135024483?s=48&v=4" width="64" 
height="64" alt="Contributor Name">](https://github.com/code-glitchers) ### Contributors and Developers [<img src="https://avatars.githubusercontent.com/u/67865621?s=64&v=4" width="64" height="64" alt="Contributor Name">](https://github.com/mindglitchers) ## Support If you find this project helpful, consider buying us a coffee: [![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-%23FFDD00?style=for-the-badge&logo=ko-fi&logoColor=white)](https://ko-fi.com/ciph3r#pageMessageModal)
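A minimal sketch of the kind of crawl described (Tweepy 3.x; the credential environment variables and the `to_row`/`crawl_user` helpers are illustrative assumptions, not the authors' actual pipeline):

```python
import os

def to_row(tweet):
    """Map a tweet's JSON payload (Tweepy's status._json) to the dataset's columns."""
    return {
        "Author": tweet["user"]["screen_name"],
        "Content": tweet["full_text"],
        "Date_Time": tweet["created_at"],
        "ID": tweet["user"]["id_str"],        # Twitter User ID, as in the card
        "Language": tweet["lang"],
        "Number_of_Likes": tweet["favorite_count"],
        "Number_of_Shares": tweet["retweet_count"],
    }

def crawl_user(screen_name, limit=3200):
    """Fetch a user's timeline with retweets excluded, as in this dataset."""
    import tweepy  # imported lazily so to_row stays dependency-free
    auth = tweepy.OAuthHandler(os.environ["TW_KEY"], os.environ["TW_SECRET"])
    auth.set_access_token(os.environ["TW_TOKEN"], os.environ["TW_TOKEN_SECRET"])
    api = tweepy.API(auth, wait_on_rate_limit=True)
    cursor = tweepy.Cursor(
        api.user_timeline,
        screen_name=screen_name,
        include_rts=False,      # retweets excluded
        tweet_mode="extended",  # full tweet text in `full_text`
    )
    return [to_row(status._json) for status in cursor.items(limit)]
```

Running `crawl_user` for each of the 20 accounts listed above would yield rows in the same schema as the dataset.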
haydenbanz/Tweets_Dataset
[ "license:mit", "twitter", "region:us" ]
2024-01-09T16:19:26+00:00
{"license": "mit", "tags": ["twitter"]}
2024-01-09T16:24:18+00:00
[]
[]
TAGS #license-mit #twitter #region-us
Twitter User Dataset ==================== This dataset was obtained by crawling Twitter's REST API using the Python library Tweepy 3. The dataset comprises tweets from the 20 most popular Twitter users based on the number of followers, with retweets excluded. These accounts include public figures such as Katy Perry and Barack Obama, platforms like YouTube and Instagram, and television channels such as CNN Breaking News and The Ellen Show. The dataset presents a diverse collection of tweets, ranging from relatively structured and formal to completely unstructured and colloquial in style. Unfortunately, geocoordinates were not available for these tweets. Dataset Details --------------- * Purpose: The dataset has been utilized in the generation of a research paper titled "Machine Learning Techniques for Anomaly Detection in Post Arrays." * Crawled Attributes: + Author (Twitter User) + Content (Tweet) + Date\_Time + ID (Twitter User ID) + Language (Tweet Language) + Number\_of\_Likes + Number\_of\_Shares Statistics ---------- * Total Tweets: 52,543 ### Top 20 Users Screen\_Name: TheEllenShow, #Tweets: 3,147, Time Span (in days): 662 Screen\_Name: jimmyfallon, #Tweets: 3,123, Time Span (in days): 1,231 Screen\_Name: ArianaGrande, #Tweets: 3,104, Time Span (in days): 613 Screen\_Name: YouTube, #Tweets: 3,077, Time Span (in days): 411 Screen\_Name: KimKardashian, #Tweets: 2,939, Time Span (in days): 603 Screen\_Name: katyperry, #Tweets: 2,924, Time Span (in days): 1,598 Screen\_Name: selenagomez, #Tweets: 2,913, Time Span (in days): 2,266 Screen\_Name: rihanna, #Tweets: 2,877, Time Span (in days): 1,557 Screen\_Name: BarackObama, #Tweets: 2,863, Time Span (in days): 849 Screen\_Name: britneyspears, #Tweets: 2,776, Time Span (in days): 1,548 Screen\_Name: instagram, #Tweets: 2,577, Time Span (in days): 456 Screen\_Name: shakira, #Tweets: 2,530, Time Span (in days): 1,850 Screen\_Name: Cristiano, #Tweets: 2,507, Time Span (in days): 2,407 Screen\_Name: jtimberlake, #Tweets: 
2,478, Time Span (in days): 2,491 Screen\_Name: ladygaga, #Tweets: 2,329, Time Span (in days): 894 Screen\_Name: Twitter, #Tweets: 2,290, Time Span (in days): 2,593 Screen\_Name: ddlovato, #Tweets: 2,217, Time Span (in days): 741 Screen\_Name: taylorswift13, #Tweets: 2,029, Time Span (in days): 2,091 Screen\_Name: justinbieber, #Tweets: 2,000, Time Span (in days): 664 Screen\_Name: cnnbrk, #Tweets: 1,842, Time Span (in days): 183 (2017) If you have any questions or feedback, please contact the project maintainers: * 0x\_hayden * Email: t5hlt8zcp@URL Credits ------- This project is maintained by: <img src="URL width="64" height="64" alt="Contributor Name"> ### Contributors and Developers <img src="URL width="64" height="64" alt="Contributor Name"> Support ------- If you find this project helpful, consider buying us a coffee: ![Buy Me a Coffee](URL
[ "### Top 20 Users\n\n\nScreen\\_Name: TheEllenShow, #Tweets: 3,147, Time Span (in days): 662\nScreen\\_Name: jimmyfallon, #Tweets: 3,123, Time Span (in days): 1,231\nScreen\\_Name: ArianaGrande, #Tweets: 3,104, Time Span (in days): 613\nScreen\\_Name: YouTube, #Tweets: 3,077, Time Span (in days): 411\nScreen\\_Name: KimKardashian, #Tweets: 2,939, Time Span (in days): 603\nScreen\\_Name: katyperry, #Tweets: 2,924, Time Span (in days): 1,598\nScreen\\_Name: selenagomez, #Tweets: 2,913, Time Span (in days): 2,266\nScreen\\_Name: rihanna, #Tweets: 2,877, Time Span (in days): 1,557\nScreen\\_Name: BarackObama, #Tweets: 2,863, Time Span (in days): 849\nScreen\\_Name: britneyspears, #Tweets: 2,776, Time Span (in days): 1,548\nScreen\\_Name: instagram, #Tweets: 2,577, Time Span (in days): 456\nScreen\\_Name: shakira, #Tweets: 2,530, Time Span (in days): 1,850\nScreen\\_Name: Cristiano, #Tweets: 2,507, Time Span (in days): 2,407\nScreen\\_Name: jtimberlake, #Tweets: 2,478, Time Span (in days): 2,491\nScreen\\_Name: ladygaga, #Tweets: 2,329, Time Span (in days): 894\nScreen\\_Name: Twitter, #Tweets: 2,290, Time Span (in days): 2,593\nScreen\\_Name: ddlovato, #Tweets: 2,217, Time Span (in days): 741\nScreen\\_Name: taylorswift13, #Tweets: 2,029, Time Span (in days): 2,091\nScreen\\_Name: justinbieber, #Tweets: 2,000, Time Span (in days): 664\nScreen\\_Name: cnnbrk, #Tweets: 1,842, Time Span (in days): 183 (2017)\n\n\nIf you have any questions or feedback, please contact the project maintainers:\n\n\n* 0x\\_hayden\n* Email: t5hlt8zcp@URL\n\n\nCredits\n-------\n\n\nThis project is maintained by:\n\n\n<img src=\"URL width=\"64\" height=\"64\" alt=\"Contributor Name\">", "### Contributors and Developers\n\n\n<img src=\"URL width=\"64\" height=\"64\" alt=\"Contributor Name\">\n\n\nSupport\n-------\n\n\nIf you find this project helpful, consider buying us a coffee:\n\n\n![Buy Me a Coffee](URL" ]
[ "TAGS\n#license-mit #twitter #region-us \n", "### Top 20 Users\n\n\nScreen\\_Name: TheEllenShow, #Tweets: 3,147, Time Span (in days): 662\nScreen\\_Name: jimmyfallon, #Tweets: 3,123, Time Span (in days): 1,231\nScreen\\_Name: ArianaGrande, #Tweets: 3,104, Time Span (in days): 613\nScreen\\_Name: YouTube, #Tweets: 3,077, Time Span (in days): 411\nScreen\\_Name: KimKardashian, #Tweets: 2,939, Time Span (in days): 603\nScreen\\_Name: katyperry, #Tweets: 2,924, Time Span (in days): 1,598\nScreen\\_Name: selenagomez, #Tweets: 2,913, Time Span (in days): 2,266\nScreen\\_Name: rihanna, #Tweets: 2,877, Time Span (in days): 1,557\nScreen\\_Name: BarackObama, #Tweets: 2,863, Time Span (in days): 849\nScreen\\_Name: britneyspears, #Tweets: 2,776, Time Span (in days): 1,548\nScreen\\_Name: instagram, #Tweets: 2,577, Time Span (in days): 456\nScreen\\_Name: shakira, #Tweets: 2,530, Time Span (in days): 1,850\nScreen\\_Name: Cristiano, #Tweets: 2,507, Time Span (in days): 2,407\nScreen\\_Name: jtimberlake, #Tweets: 2,478, Time Span (in days): 2,491\nScreen\\_Name: ladygaga, #Tweets: 2,329, Time Span (in days): 894\nScreen\\_Name: Twitter, #Tweets: 2,290, Time Span (in days): 2,593\nScreen\\_Name: ddlovato, #Tweets: 2,217, Time Span (in days): 741\nScreen\\_Name: taylorswift13, #Tweets: 2,029, Time Span (in days): 2,091\nScreen\\_Name: justinbieber, #Tweets: 2,000, Time Span (in days): 664\nScreen\\_Name: cnnbrk, #Tweets: 1,842, Time Span (in days): 183 (2017)\n\n\nIf you have any questions or feedback, please contact the project maintainers:\n\n\n* 0x\\_hayden\n* Email: t5hlt8zcp@URL\n\n\nCredits\n-------\n\n\nThis project is maintained by:\n\n\n<img src=\"URL width=\"64\" height=\"64\" alt=\"Contributor Name\">", "### Contributors and Developers\n\n\n<img src=\"URL width=\"64\" height=\"64\" alt=\"Contributor Name\">\n\n\nSupport\n-------\n\n\nIf you find this project helpful, consider buying us a coffee:\n\n\n![Buy Me a Coffee](URL" ]
[ 13, 591, 58 ]
[ "passage: TAGS\n#license-mit #twitter #region-us \n" ]
a9e76f761832d38aae47cdcd09342a71e38e35f9
A very small sampling of snippets of comedy routines by George Carlin and Tom Segura.
unalignment/comedy-snippets-v0.1
[ "license:cc-by-nc-4.0", "region:us" ]
2024-01-09T16:37:43+00:00
{"license": "cc-by-nc-4.0"}
2024-01-09T16:41:15+00:00
[]
[]
TAGS #license-cc-by-nc-4.0 #region-us
A very small sampling of snippets of comedy routines by George Carlin and Tom Segura.
[]
[ "TAGS\n#license-cc-by-nc-4.0 #region-us \n" ]
[ 17 ]
[ "passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n" ]
6c8514835b6cc9a1af1d5a97b09a49801c3da0be
# Dataset Card for "bpo_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/bpo_binarized
[ "region:us" ]
2024-01-09T16:41:10+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24175876, "num_examples": 14195}, {"name": "test", "num_bytes": 286944, "num_examples": 200}], "download_size": 13481262, "dataset_size": 24462820}}
2024-01-09T16:41:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bpo_binarized" More Information needed
[ "# Dataset Card for \"bpo_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bpo_binarized\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bpo_binarized\"\n\nMore Information needed" ]
281caada7cfcf95e1c38c24dd2a92757d4c6819c
One thousand original sketches and their associated ground truth images from the [Sketchy Dataset](https://faculty.cc.gatech.edu/~hays/tmp/sketchy-database.pdf), plus four thousand augmented versions of these sketches, where strokes have a random offset of 15px in the x and y directions.
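The augmentation described can be sketched as follows (an illustrative reimplementation, not the dataset's actual code; strokes are assumed to be arrays of (x, y) points, and the 15px figure is read as a per-stroke offset drawn uniformly from [-15, 15] on each axis):

```python
import numpy as np

def jitter_strokes(strokes, max_offset=15, rng=None):
    """Rigidly translate each stroke by a random offset in [-max_offset, max_offset] px per axis."""
    if rng is None:
        rng = np.random.default_rng()
    augmented = []
    for stroke in strokes:  # each stroke: (N, 2) array of (x, y) points
        offset = rng.uniform(-max_offset, max_offset, size=2)
        augmented.append(np.asarray(stroke, dtype=float) + offset)
    return augmented
```

Applying this four times per original sketch would produce the 1,000 + 4,000 split described above.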
hxgrace/augmentedSketches
[ "region:us" ]
2024-01-09T16:46:42+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "sketch", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 339049273.0, "num_examples": 5000}], "download_size": 320275962, "dataset_size": 339049273.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-07T02:08:58+00:00
[]
[]
TAGS #region-us
One thousand original sketches and their associated ground truth images from the Sketchy Dataset, plus four thousand augmented versions of these sketches, where strokes have a random offset of 15px in the x and y directions.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
0e39ee3a2db8691b16b9dddba5195e70be87b562
# Dataset Card for "wikitext-2-raw-v1-shuffled" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/wikitext-2-raw-v1-shuffled
[ "region:us" ]
2024-01-09T17:02:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11018604, "num_examples": 36718}, {"name": "validation", "num_bytes": 1159288, "num_examples": 3760}], "download_size": 7055354, "dataset_size": 12177892}}
2024-01-09T17:02:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikitext-2-raw-v1-shuffled" More Information needed
[ "# Dataset Card for \"wikitext-2-raw-v1-shuffled\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikitext-2-raw-v1-shuffled\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikitext-2-raw-v1-shuffled\"\n\nMore Information needed" ]
d44db8e274bed52fab54c1f776ac05dd1316cab2
# Dataset Card for "wikitext-103-raw-v1-shuffled" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/wikitext-103-raw-v1-shuffled
[ "region:us" ]
2024-01-09T17:03:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 544387646, "num_examples": 1801350}, {"name": "validation", "num_bytes": 1159288, "num_examples": 3760}], "download_size": 316392316, "dataset_size": 545546934}}
2024-01-09T17:04:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikitext-103-raw-v1-shuffled" More Information needed
[ "# Dataset Card for \"wikitext-103-raw-v1-shuffled\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikitext-103-raw-v1-shuffled\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikitext-103-raw-v1-shuffled\"\n\nMore Information needed" ]
5c6fff61c3b89da054bab218f65130cbc47554c1
# Dataset Card for "gsm8k_sympy" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tfshaman/gsm8k_sympy
[ "region:us" ]
2024-01-09T17:04:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "gsm8k_id", "dtype": "int64"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "code_output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15440042, "num_examples": 4046}], "download_size": 5947292, "dataset_size": 15440042}}
2024-01-20T20:38:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gsm8k_sympy" More Information needed
[ "# Dataset Card for \"gsm8k_sympy\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gsm8k_sympy\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gsm8k_sympy\"\n\nMore Information needed" ]
a951f8fffb86bd4e7f4ca1ed0e70b7e9e5c7738d
This dataset contains the Czech subset of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. Each page is divided into paragraphs, stored as a list in the `chunks` column. For every paragraph, embeddings are created using the [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) model.

## Usage

Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("karmiq/wikipedia-embeddings-cs-e5-large", split="train")
ds[1]
```

```
{
  'id': '1',
  'url': 'https://cs.wikipedia.org/wiki/Astronomie',
  'title': 'Astronomie',
  'chunks': [
    'Astronomie, řecky αστρονομία z άστρον ( astron ) hvězda a νόμος ( nomos )...',
    'Myšlenky Aristotelovy rozvinul ve 2. století našeho letopočtu Klaudios Ptolemaios...',
    ...,
  ],
  'embeddings': [
    [0.09006806463003159, -0.009814552962779999, ...],
    [0.10767366737127304, ...],
    ...
  ]
}
```

The structure makes it easy to use the dataset for implementing semantic search.

<details>
<summary>Load the data in Elasticsearch</summary>

```python
from elasticsearch.helpers import parallel_bulk
from tqdm.auto import tqdm

# `es` is assumed to be an already initialized Elasticsearch client

def doc_generator(data, batch_size=1000):
    for batch in data.with_format("numpy").iter(batch_size):
        for i, id in enumerate(batch["id"]):
            output = {"id": id}
            output["title"] = batch["title"][i]
            output["url"] = batch["url"][i]
            output["parts"] = [
                {"chunk": chunk, "embedding": embedding}
                for chunk, embedding in zip(batch["chunks"][i], batch["embeddings"][i])
            ]
            yield output

num_indexed, num_failed = 0, 0
progress = tqdm(total=ds.num_rows, unit="doc", desc="Indexing")

for ok, info in parallel_bulk(
    es,
    index="wikipedia-search",
    actions=doc_generator(ds),
    raise_on_error=False,
):
    if ok:
        num_indexed += 1
    else:
        num_failed += 1
        print(
            f"ERROR {info['index']['status']}: "
            f"{info['index']['error']['type']}: {info['index']['error']['caused_by']['type']}: "
            f"{info['index']['error']['caused_by']['reason'][:250]}")
    progress.update(1)
```
</details>

<details>
<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>

```python
import sentence_transformers
import os
import textwrap

model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-large")

ds.set_format(type="torch", columns=["embeddings"], output_all_columns=True)

# Flatten the dataset
def explode_sequence(batch):
    output = {"id": [], "url": [], "title": [], "chunk": [], "embedding": []}
    for id, url, title, chunks, embeddings in zip(
        batch["id"], batch["url"], batch["title"], batch["chunks"], batch["embeddings"]
    ):
        output["id"].extend([id for _ in range(len(chunks))])
        output["url"].extend([url for _ in range(len(chunks))])
        output["title"].extend([title for _ in range(len(chunks))])
        output["chunk"].extend(chunks)
        output["embedding"].extend(embeddings)
    return output

ds_flat = ds.map(
    explode_sequence,
    batched=True,
    remove_columns=ds.column_names,
    num_proc=min(os.cpu_count(), 32),
    desc="Flatten")

ds_flat

query = "Čím se zabývá fyzika?"

hits = sentence_transformers.util.semantic_search(
    query_embeddings=model.encode(query),
    corpus_embeddings=ds_flat["embedding"],
    top_k=10)

for hit in hits[0]:
    title = ds_flat[hit['corpus_id']]['title']
    chunk = ds_flat[hit['corpus_id']]['chunk']
    print(f"[{hit['score']:0.2f}] {textwrap.shorten(chunk, width=100, placeholder='…')} [{title}]")

# [0.90] Fyzika částic ( též částicová fyzika ) je oblast fyziky, která se zabývá částicemi. V širším smyslu… [Fyzika částic]
# [0.89] Fyzika ( z řeckého φυσικός ( fysikos ): přírodní, ze základu φύσις ( fysis ): příroda, archaicky… [Fyzika]
# ...
```
</details>

The embeddings generation took about 6 hours on an NVIDIA A100 80GB GPU.

## License

See license of the original dataset: <https://huggingface.co/datasets/wikimedia/wikipedia>.
karmiq/wikipedia-embeddings-cs-e5-large
[ "task_categories:text-generation", "task_categories:fill-mask", "size_categories:100K<n<1M", "language:cs", "license:cc-by-sa-3.0", "license:gfdl", "region:us" ]
2024-01-09T17:36:08+00:00
{"language": ["cs"], "license": ["cc-by-sa-3.0", "gfdl"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "fill-mask"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "chunks", "sequence": "string"}, {"name": "embeddings", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 6167551972, "num_examples": 534044}], "download_size": 5897354237, "dataset_size": 6167551972}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-17T06:38:35+00:00
[]
[ "cs" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #size_categories-100K<n<1M #language-Czech #license-cc-by-sa-3.0 #license-gfdl #region-us
This dataset contains the Czech subset of the 'wikimedia/wikipedia' dataset. Each page is divided into paragraphs, stored as a list in the 'chunks' column. For every paragraph, embeddings are created using the 'intfloat/multilingual-e5-large' model. ## Usage Load the dataset: The structure makes it easy to use the dataset for implementing semantic search. <details> <summary>Load the data in Elasticsearch</summary> </details> <details> <summary>Use <code>sentence_transformers.util.semantic_search</code></summary> </details> The embeddings generation took about 6 hours on an NVIDIA A100 80GB GPU. ## License See license of the original dataset: <URL
[ "## Usage\n\nLoad the dataset:\n\n\n\n\n\nThe structure makes it easy to use the dataset for implementing semantic search.\n\n<details>\n<summary>Load the data in Elasticsearch</summary>\n\n\n</details>\n\n<details>\n<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>\n\n\n</details>\n\nThe embeddings generation took about 6 hours on an NVIDIA A100 80GB GPU.", "## License\n\nSee license of the original dataset: <URL" ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-100K<n<1M #language-Czech #license-cc-by-sa-3.0 #license-gfdl #region-us \n", "## Usage\n\nLoad the dataset:\n\n\n\n\n\nThe structure makes it easy to use the dataset for implementing semantic search.\n\n<details>\n<summary>Load the data in Elasticsearch</summary>\n\n\n</details>\n\n<details>\n<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>\n\n\n</details>\n\nThe embeddings generation took about 6 hours on an NVIDIA A100 80GB GPU.", "## License\n\nSee license of the original dataset: <URL" ]
[ 64, 107, 12 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-100K<n<1M #language-Czech #license-cc-by-sa-3.0 #license-gfdl #region-us \n## Usage\n\nLoad the dataset:\n\n\n\n\n\nThe structure makes it easy to use the dataset for implementing semantic search.\n\n<details>\n<summary>Load the data in Elasticsearch</summary>\n\n\n</details>\n\n<details>\n<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>\n\n\n</details>\n\nThe embeddings generation took about 6 hours on an NVIDIA A100 80GB GPU.## License\n\nSee license of the original dataset: <URL" ]
65da1f1ded86b56f59b401de25236966ad7e6d21
# Dataset Card for "pippa_deduped_detoxify_score"

This dataset was produced by running Detoxify (https://github.com/unitaryai/detoxify) on the following dataset:

- PygmalionAI/PIPPA (deduped version only)

On cursory review, there are some outliers:

- content not marked for toxicity
- content marked for toxicity incorrectly
- some content marked with high scores that doesn't seem toxic
- some content not marked when clearly offensive

However, the bulk seems to be fairly on the mark, so I'm releasing this version with potential for updates and improvements later, in case this is useful to someone.
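A common use of the per-category scores is filtering conversations with a keep/drop predicate. A minimal sketch — the column names match this card's `dataset_info`, but the example rows and the 0.5 threshold are illustrative, not taken from the actual data:

```python
# Stand-in rows mirroring the dataset's columns (values are made up).
rows = [
    {"conversations": "A: Hello!\nB: Hi there.", "toxicity": 0.02,
     "severe_toxicity": 0.001, "obscene": 0.01, "threat": 0.001,
     "insult": 0.01, "identity_attack": 0.002},
    {"conversations": "A: You are awful.", "toxicity": 0.91,
     "severe_toxicity": 0.12, "obscene": 0.35, "threat": 0.04,
     "insult": 0.88, "identity_attack": 0.05},
]

SCORE_COLUMNS = ["toxicity", "severe_toxicity", "obscene",
                 "threat", "insult", "identity_attack"]

def is_clean(row, threshold=0.5):
    # Keep a row only if every Detoxify score stays below the threshold.
    return all(row[col] < threshold for col in SCORE_COLUMNS)

clean = [row for row in rows if is_clean(row)]
print(len(clean))  # 1
```

With the real dataset, the same predicate can be passed directly to `datasets.Dataset.filter`, e.g. `ds.filter(is_clean)`.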
jtatman/pippa_deduped_detoxify_score
[ "task_categories:text-classification", "task_categories:feature-extraction", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "detoxify", "not-for-all-audiences", "sentiment analysis", "toxic chat", "region:us" ]
2024-01-09T17:45:48+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "feature-extraction"], "pretty_name": "pippa_detoxify_scores", "dataset_info": {"features": [{"name": "conversations", "dtype": "string"}, {"name": "toxicity", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "obscene", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "identity_attack", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 4188597, "num_examples": 15298}], "download_size": 2110062, "dataset_size": 4188597}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["detoxify", "not-for-all-audiences", "sentiment analysis", "toxic chat"]}
2024-01-09T18:00:06+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-apache-2.0 #detoxify #not-for-all-audiences #sentiment analysis #toxic chat #region-us
# Dataset Card for "pippa_deduped_detoxify_score" This dataset is produced from using Detoxify (URL on the dataset: - PygmalionAI/PIPPA (deduped version only) On cursory review, there are some outliers: - content not marked for toxicity - content marked for toxicity incorrectly - some content marked with high scores that doesn't seem toxic - some content not marked when clearly offensive However, the bulk seems to be fairly right on the mark, so I'm releasing this version with potential for updates and improvements later, in case this is useful to someone.
[ "# Dataset Card for \"pippa_deduped_detoxify_score\"\n\nThis dataset is produced from using Detoxify (URL on the dataset:\n- PygmalionAI/PIPPA (deduped version only)\n\nOn cursory review, there are some outliers:\n- content not marked for toxicity\n- content marked for toxicity incorrectly\n- some content marked with high scores that doesn't seem toxic\n- some content not marked when clearly offensive\n\nHowever, the bulk seems to be fairly right on the mark, so I'm releasing this version with potential for updates and improvements later, in case this is useful to someone." ]
[ "TAGS\n#task_categories-text-classification #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-apache-2.0 #detoxify #not-for-all-audiences #sentiment analysis #toxic chat #region-us \n", "# Dataset Card for \"pippa_deduped_detoxify_score\"\n\nThis dataset is produced from using Detoxify (URL on the dataset:\n- PygmalionAI/PIPPA (deduped version only)\n\nOn cursory review, there are some outliers:\n- content not marked for toxicity\n- content marked for toxicity incorrectly\n- some content marked with high scores that doesn't seem toxic\n- some content not marked when clearly offensive\n\nHowever, the bulk seems to be fairly right on the mark, so I'm releasing this version with potential for updates and improvements later, in case this is useful to someone." ]
[ 73, 146 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-apache-2.0 #detoxify #not-for-all-audiences #sentiment analysis #toxic chat #region-us \n# Dataset Card for \"pippa_deduped_detoxify_score\"\n\nThis dataset is produced from using Detoxify (URL on the dataset:\n- PygmalionAI/PIPPA (deduped version only)\n\nOn cursory review, there are some outliers:\n- content not marked for toxicity\n- content marked for toxicity incorrectly\n- some content marked with high scores that doesn't seem toxic\n- some content not marked when clearly offensive\n\nHowever, the bulk seems to be fairly right on the mark, so I'm releasing this version with potential for updates and improvements later, in case this is useful to someone." ]